Jun 25 16:18:26.899274 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 13:16:37 -00 2024 Jun 25 16:18:26.899293 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:18:26.899303 kernel: BIOS-provided physical RAM map: Jun 25 16:18:26.899309 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 16:18:26.899315 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 16:18:26.899325 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 16:18:26.899332 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jun 25 16:18:26.899339 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jun 25 16:18:26.899344 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 25 16:18:26.899352 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 16:18:26.899358 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 25 16:18:26.899364 kernel: NX (Execute Disable) protection: active Jun 25 16:18:26.899369 kernel: SMBIOS 2.8 present. Jun 25 16:18:26.899376 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jun 25 16:18:26.899383 kernel: Hypervisor detected: KVM Jun 25 16:18:26.899391 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 16:18:26.899397 kernel: kvm-clock: using sched offset of 3084744806 cycles Jun 25 16:18:26.899404 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 16:18:26.899414 kernel: tsc: Detected 2794.750 MHz processor Jun 25 16:18:26.899420 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 16:18:26.899427 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 16:18:26.899434 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jun 25 16:18:26.899440 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 16:18:26.899448 kernel: Using GB pages for direct mapping Jun 25 16:18:26.899455 kernel: ACPI: Early table checksum verification disabled Jun 25 16:18:26.899461 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jun 25 16:18:26.899468 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899474 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899481 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899487 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jun 25 16:18:26.899494 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899500 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899508 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 25 16:18:26.899515 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jun 25 16:18:26.899521 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] 
Jun 25 16:18:26.899528 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jun 25 16:18:26.899534 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jun 25 16:18:26.899541 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jun 25 16:18:26.899547 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jun 25 16:18:26.899554 kernel: No NUMA configuration found Jun 25 16:18:26.899564 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jun 25 16:18:26.899571 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jun 25 16:18:26.899578 kernel: Zone ranges: Jun 25 16:18:26.899585 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 16:18:26.899592 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jun 25 16:18:26.899599 kernel: Normal empty Jun 25 16:18:26.899606 kernel: Movable zone start for each node Jun 25 16:18:26.899614 kernel: Early memory node ranges Jun 25 16:18:26.899621 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 16:18:26.899631 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jun 25 16:18:26.899638 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jun 25 16:18:26.899647 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 16:18:26.899654 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 16:18:26.899661 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jun 25 16:18:26.899668 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 25 16:18:26.899675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 16:18:26.899683 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 25 16:18:26.899690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 25 16:18:26.899697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 16:18:26.899704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 16:18:26.899711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 16:18:26.899718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 16:18:26.899725 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 16:18:26.899732 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 16:18:26.899739 kernel: TSC deadline timer available Jun 25 16:18:26.899747 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jun 25 16:18:26.899754 kernel: kvm-guest: KVM setup pv remote TLB flush Jun 25 16:18:26.899761 kernel: kvm-guest: setup PV sched yield Jun 25 16:18:26.899767 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jun 25 16:18:26.899774 kernel: Booting paravirtualized kernel on KVM Jun 25 16:18:26.899781 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 16:18:26.899788 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jun 25 16:18:26.899795 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jun 25 16:18:26.899802 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jun 25 16:18:26.899810 kernel: pcpu-alloc: [0] 0 1 2 3 Jun 25 16:18:26.899817 kernel: kvm-guest: PV spinlocks enabled Jun 25 16:18:26.899824 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 16:18:26.899831 kernel: Fallback order for Node 0: 0 Jun 25 16:18:26.899849 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632733 Jun 25 16:18:26.899856 kernel: Policy zone: DMA32 Jun 25 16:18:26.899864 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:18:26.899872 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 16:18:26.899880 kernel: random: crng init done Jun 25 16:18:26.899887 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 16:18:26.899894 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 16:18:26.899901 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 16:18:26.899908 kernel: Memory: 2430544K/2571756K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 140952K reserved, 0K cma-reserved) Jun 25 16:18:26.899915 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jun 25 16:18:26.899922 kernel: ftrace: allocating 36080 entries in 141 pages Jun 25 16:18:26.899929 kernel: ftrace: allocated 141 pages with 4 groups Jun 25 16:18:26.899936 kernel: Dynamic Preempt: voluntary Jun 25 16:18:26.899944 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 16:18:26.899952 kernel: rcu: RCU event tracing is enabled. Jun 25 16:18:26.899959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jun 25 16:18:26.899966 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 16:18:26.899973 kernel: Rude variant of Tasks RCU enabled. Jun 25 16:18:26.899980 kernel: Tracing variant of Tasks RCU enabled. Jun 25 16:18:26.899987 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 16:18:26.899997 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jun 25 16:18:26.900004 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jun 25 16:18:26.900012 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 16:18:26.900019 kernel: Console: colour VGA+ 80x25 Jun 25 16:18:26.900026 kernel: printk: console [ttyS0] enabled Jun 25 16:18:26.900033 kernel: ACPI: Core revision 20220331 Jun 25 16:18:26.900040 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 25 16:18:26.900049 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 16:18:26.900056 kernel: x2apic enabled Jun 25 16:18:26.900072 kernel: Switched APIC routing to physical x2apic. Jun 25 16:18:26.900079 kernel: kvm-guest: setup PV IPIs Jun 25 16:18:26.900086 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 25 16:18:26.900095 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 25 16:18:26.900103 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jun 25 16:18:26.900110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 25 16:18:26.900116 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 25 16:18:26.900123 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 25 16:18:26.900130 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 16:18:26.900137 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 16:18:26.900145 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 16:18:26.900158 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 16:18:26.900165 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 25 16:18:26.900173 kernel: RETBleed: Mitigation: untrained return thunk Jun 25 16:18:26.900181 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 25 16:18:26.900189 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 25 16:18:26.900196 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 16:18:26.900203 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 16:18:26.900211 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 16:18:26.900218 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 16:18:26.900227 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 25 16:18:26.900235 kernel: Freeing SMP alternatives memory: 32K Jun 25 16:18:26.900242 kernel: pid_max: default: 32768 minimum: 301 Jun 25 16:18:26.900249 kernel: LSM: Security Framework initializing Jun 25 16:18:26.900256 kernel: SELinux: Initializing. Jun 25 16:18:26.900263 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:18:26.900271 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 16:18:26.900278 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 25 16:18:26.900288 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:18:26.900295 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:18:26.900302 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:18:26.900310 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:18:26.900317 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 16:18:26.900324 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jun 25 16:18:26.900331 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 25 16:18:26.900339 kernel: ... version: 0 Jun 25 16:18:26.900346 kernel: ... bit width: 48 Jun 25 16:18:26.900355 kernel: ... generic registers: 6 Jun 25 16:18:26.900364 kernel: ... value mask: 0000ffffffffffff Jun 25 16:18:26.900372 kernel: ... max period: 00007fffffffffff Jun 25 16:18:26.900379 kernel: ... fixed-purpose events: 0 Jun 25 16:18:26.900386 kernel: ... event mask: 000000000000003f Jun 25 16:18:26.900393 kernel: signal: max sigframe size: 1776 Jun 25 16:18:26.900400 kernel: rcu: Hierarchical SRCU implementation. Jun 25 16:18:26.900408 kernel: rcu: Max phase no-delay instances is 400. Jun 25 16:18:26.900415 kernel: smp: Bringing up secondary CPUs ... Jun 25 16:18:26.900422 kernel: x86: Booting SMP configuration: Jun 25 16:18:26.900431 kernel: .... 
node #0, CPUs: #1 #2 #3 Jun 25 16:18:26.900438 kernel: smp: Brought up 1 node, 4 CPUs Jun 25 16:18:26.900445 kernel: smpboot: Max logical packages: 1 Jun 25 16:18:26.900453 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jun 25 16:18:26.900460 kernel: devtmpfs: initialized Jun 25 16:18:26.900467 kernel: x86/mm: Memory block size: 128MB Jun 25 16:18:26.900475 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 16:18:26.900482 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jun 25 16:18:26.900490 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 16:18:26.900498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 16:18:26.900506 kernel: audit: initializing netlink subsys (disabled) Jun 25 16:18:26.900513 kernel: audit: type=2000 audit(1719332306.372:1): state=initialized audit_enabled=0 res=1 Jun 25 16:18:26.900523 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 16:18:26.900530 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 16:18:26.900537 kernel: cpuidle: using governor menu Jun 25 16:18:26.900545 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 16:18:26.900552 kernel: dca service started, version 1.12.1 Jun 25 16:18:26.900559 kernel: PCI: Using configuration type 1 for base access Jun 25 16:18:26.900568 kernel: PCI: Using configuration type 1 for extended access Jun 25 16:18:26.900575 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 25 16:18:26.900583 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 16:18:26.900590 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 16:18:26.900597 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 16:18:26.900605 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 16:18:26.900612 kernel: ACPI: Added _OSI(Module Device) Jun 25 16:18:26.900619 kernel: ACPI: Added _OSI(Processor Device) Jun 25 16:18:26.900627 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 16:18:26.900635 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 16:18:26.900643 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 16:18:26.900650 kernel: ACPI: Interpreter enabled Jun 25 16:18:26.900657 kernel: ACPI: PM: (supports S0 S3 S5) Jun 25 16:18:26.900664 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 16:18:26.900672 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 16:18:26.900679 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 16:18:26.900686 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 25 16:18:26.900693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 16:18:26.900855 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 16:18:26.900870 kernel: acpiphp: Slot [3] registered Jun 25 16:18:26.900878 kernel: acpiphp: Slot [4] registered Jun 25 16:18:26.900885 kernel: acpiphp: Slot [5] registered Jun 25 16:18:26.900892 kernel: acpiphp: Slot [6] registered Jun 25 16:18:26.900900 kernel: acpiphp: Slot [7] registered Jun 25 16:18:26.900907 kernel: acpiphp: Slot [8] registered Jun 25 16:18:26.900914 kernel: acpiphp: Slot [9] registered Jun 25 16:18:26.900924 kernel: acpiphp: Slot [10] registered Jun 25 16:18:26.900931 kernel: acpiphp: Slot [11] registered Jun 25 16:18:26.900939 
kernel: acpiphp: Slot [12] registered Jun 25 16:18:26.900946 kernel: acpiphp: Slot [13] registered Jun 25 16:18:26.900953 kernel: acpiphp: Slot [14] registered Jun 25 16:18:26.900960 kernel: acpiphp: Slot [15] registered Jun 25 16:18:26.900967 kernel: acpiphp: Slot [16] registered Jun 25 16:18:26.900975 kernel: acpiphp: Slot [17] registered Jun 25 16:18:26.900982 kernel: acpiphp: Slot [18] registered Jun 25 16:18:26.900989 kernel: acpiphp: Slot [19] registered Jun 25 16:18:26.900998 kernel: acpiphp: Slot [20] registered Jun 25 16:18:26.901005 kernel: acpiphp: Slot [21] registered Jun 25 16:18:26.901012 kernel: acpiphp: Slot [22] registered Jun 25 16:18:26.901020 kernel: acpiphp: Slot [23] registered Jun 25 16:18:26.901027 kernel: acpiphp: Slot [24] registered Jun 25 16:18:26.901034 kernel: acpiphp: Slot [25] registered Jun 25 16:18:26.901041 kernel: acpiphp: Slot [26] registered Jun 25 16:18:26.901048 kernel: acpiphp: Slot [27] registered Jun 25 16:18:26.901055 kernel: acpiphp: Slot [28] registered Jun 25 16:18:26.901073 kernel: acpiphp: Slot [29] registered Jun 25 16:18:26.901080 kernel: acpiphp: Slot [30] registered Jun 25 16:18:26.901087 kernel: acpiphp: Slot [31] registered Jun 25 16:18:26.901094 kernel: PCI host bridge to bus 0000:00 Jun 25 16:18:26.901191 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 16:18:26.901262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 16:18:26.901329 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 16:18:26.901395 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jun 25 16:18:26.901464 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jun 25 16:18:26.901530 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 16:18:26.901634 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 16:18:26.901726 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 16:18:26.901843 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jun 25 16:18:26.901923 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jun 25 16:18:26.902191 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jun 25 16:18:26.902267 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jun 25 16:18:26.902344 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jun 25 16:18:26.902418 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jun 25 16:18:26.902512 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jun 25 16:18:26.902587 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 25 16:18:26.902663 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 25 16:18:26.902761 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jun 25 16:18:26.902849 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jun 25 16:18:26.902925 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jun 25 16:18:26.902999 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jun 25 16:18:26.903089 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 16:18:26.903182 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jun 25 16:18:26.903262 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jun 25 16:18:26.903340 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jun 25 16:18:26.903414 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jun 25 16:18:26.903502 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jun 25 16:18:26.903594 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jun 25 16:18:26.903672 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jun 25 16:18:26.903747 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jun 25 16:18:26.903957 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jun 25 16:18:26.904093 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jun 25 16:18:26.904204 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jun 25 16:18:26.904281 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jun 25 16:18:26.904353 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jun 25 16:18:26.904373 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 16:18:26.904381 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 16:18:26.904388 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 16:18:26.904399 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 16:18:26.904407 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 16:18:26.904414 kernel: iommu: Default domain type: Translated Jun 25 16:18:26.904421 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 16:18:26.904429 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 16:18:26.904436 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 16:18:26.904444 kernel: PTP clock support registered Jun 25 16:18:26.904451 kernel: PCI: Using ACPI for IRQ routing Jun 25 16:18:26.904458 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 16:18:26.904465 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 16:18:26.904474 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jun 25 16:18:26.904549 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 25 16:18:26.904621 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 25 16:18:26.904699 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 16:18:26.904708 kernel: vgaarb: loaded Jun 25 16:18:26.904716 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 25 16:18:26.904724 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 25 16:18:26.904731 kernel: clocksource: Switched to clocksource kvm-clock Jun 25 16:18:26.904741 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 16:18:26.904748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 16:18:26.904756 kernel: pnp: PnP ACPI init Jun 25 16:18:26.904855 kernel: pnp 00:02: [dma 2] Jun 25 16:18:26.904867 kernel: pnp: PnP ACPI: found 6 devices Jun 25 16:18:26.904876 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 16:18:26.904885 kernel: NET: Registered PF_INET protocol family Jun 25 16:18:26.904893 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 16:18:26.904905 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 16:18:26.904914 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 16:18:26.904923 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 16:18:26.904931 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 
16:18:26.904939 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 16:18:26.904945 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:18:26.904951 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 16:18:26.904957 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 16:18:26.904964 kernel: NET: Registered PF_XDP protocol family Jun 25 16:18:26.905041 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 16:18:26.905123 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 16:18:26.905201 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 16:18:26.905262 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jun 25 16:18:26.905418 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jun 25 16:18:26.905517 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 25 16:18:26.905641 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 16:18:26.905657 kernel: PCI: CLS 0 bytes, default 64 Jun 25 16:18:26.905668 kernel: Initialise system trusted keyrings Jun 25 16:18:26.905675 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 16:18:26.905681 kernel: Key type asymmetric registered Jun 25 16:18:26.905687 kernel: Asymmetric key parser 'x509' registered Jun 25 16:18:26.905693 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 16:18:26.905700 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 16:18:26.905706 kernel: io scheduler mq-deadline registered Jun 25 16:18:26.905712 kernel: io scheduler kyber registered Jun 25 16:18:26.905718 kernel: io scheduler bfq registered Jun 25 16:18:26.905726 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 16:18:26.905735 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 25 16:18:26.905746 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jun 25 16:18:26.905755 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 25 16:18:26.905763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 16:18:26.905774 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 16:18:26.905784 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 16:18:26.905795 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 16:18:26.905802 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 16:18:26.905962 kernel: rtc_cmos 00:05: RTC can wake from S4 Jun 25 16:18:26.905977 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 16:18:26.906046 kernel: rtc_cmos 00:05: registered as rtc0 Jun 25 16:18:26.906136 kernel: rtc_cmos 00:05: setting system clock to 2024-06-25T16:18:26 UTC (1719332306) Jun 25 16:18:26.906207 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 25 16:18:26.906220 kernel: NET: Registered PF_INET6 protocol family Jun 25 16:18:26.906227 kernel: Segment Routing with IPv6 Jun 25 16:18:26.906235 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 16:18:26.906248 kernel: NET: Registered PF_PACKET protocol family Jun 25 16:18:26.906256 kernel: Key type dns_resolver registered Jun 25 16:18:26.906263 kernel: IPI shorthand broadcast: enabled Jun 25 16:18:26.906270 kernel: sched_clock: Marking stable (668041185, 105428088)->(796848587, -23379314) Jun 25 16:18:26.906276 kernel: registered taskstats version 1 Jun 25 16:18:26.906282 kernel: Loading compiled-in X.509 
certificates Jun 25 16:18:26.906289 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: c37bb6ef57220bb1c07535cfcaa08c84d806a137' Jun 25 16:18:26.906295 kernel: Key type .fscrypt registered Jun 25 16:18:26.906301 kernel: Key type fscrypt-provisioning registered Jun 25 16:18:26.906309 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 16:18:26.906315 kernel: ima: Allocated hash algorithm: sha1 Jun 25 16:18:26.906321 kernel: ima: No architecture policies found Jun 25 16:18:26.906327 kernel: clk: Disabling unused clocks Jun 25 16:18:26.906333 kernel: Freeing unused kernel image (initmem) memory: 47156K Jun 25 16:18:26.906340 kernel: Write protecting the kernel read-only data: 34816k Jun 25 16:18:26.906346 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jun 25 16:18:26.906352 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jun 25 16:18:26.906362 kernel: Run /init as init process Jun 25 16:18:26.906371 kernel: with arguments: Jun 25 16:18:26.906379 kernel: /init Jun 25 16:18:26.906388 kernel: with environment: Jun 25 16:18:26.906396 kernel: HOME=/ Jun 25 16:18:26.906404 kernel: TERM=linux Jun 25 16:18:26.906410 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 16:18:26.906431 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:18:26.906441 systemd[1]: Detected virtualization kvm. Jun 25 16:18:26.906449 systemd[1]: Detected architecture x86-64. Jun 25 16:18:26.906458 systemd[1]: Running in initrd. Jun 25 16:18:26.906468 systemd[1]: No hostname configured, using default hostname. Jun 25 16:18:26.906477 systemd[1]: Hostname set to . Jun 25 16:18:26.906487 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:18:26.906496 systemd[1]: Queued start job for default target initrd.target. Jun 25 16:18:26.906506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:18:26.906515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:18:26.906523 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:18:26.906530 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:18:26.906536 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:18:26.906543 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:18:26.906551 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:18:26.906558 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:18:26.906566 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 16:18:26.906573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 16:18:26.906580 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 16:18:26.906587 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:18:26.906594 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:18:26.906603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:18:26.906613 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 25 16:18:26.906623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:18:26.906635 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 16:18:26.906644 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 16:18:26.906654 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:18:26.906663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:18:26.906670 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 16:18:26.906679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:18:26.906688 kernel: audit: type=1130 audit(1719332306.897:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.906696 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 16:18:26.906708 systemd-journald[194]: Journal started Jun 25 16:18:26.906747 systemd-journald[194]: Runtime Journal (/run/log/journal/fa08475784ee45ccbac9bcd3101670cb) is 6.0M, max 48.4M, 42.3M free. Jun 25 16:18:26.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.905952 systemd-modules-load[196]: Inserted module 'overlay' Jun 25 16:18:26.959082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 16:18:26.959105 kernel: Bridge firewalling registered Jun 25 16:18:26.959115 kernel: audit: type=1130 audit(1719332306.953:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.959124 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:18:26.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.931149 systemd-modules-load[196]: Inserted module 'br_netfilter' Jun 25 16:18:26.964479 kernel: audit: type=1130 audit(1719332306.960:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.961374 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:18:26.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.970870 kernel: audit: type=1130 audit(1719332306.964:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.970896 kernel: SCSI subsystem initialized Jun 25 16:18:26.974092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jun 25 16:18:26.974844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:18:26.975744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:18:26.985428 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 16:18:26.985465 kernel: device-mapper: uevent: version 1.0.3 Jun 25 16:18:26.985475 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 16:18:26.985696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:18:26.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.987191 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:18:26.991042 kernel: audit: type=1130 audit(1719332306.986:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.992747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 16:18:26.995298 kernel: audit: type=1130 audit(1719332306.990:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:26.995322 kernel: audit: type=1334 audit(1719332306.991:8): prog-id=6 op=LOAD Jun 25 16:18:26.991000 audit: BPF prog-id=6 op=LOAD Jun 25 16:18:27.000057 systemd-modules-load[196]: Inserted module 'dm_multipath' Jun 25 16:18:27.001951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:18:27.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.007856 kernel: audit: type=1130 audit(1719332307.003:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.012022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:18:27.015375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:18:27.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.019054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 16:18:27.022995 kernel: audit: type=1130 audit(1719332307.017:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.023365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 25 16:18:27.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.031091 systemd-resolved[209]: Positive Trust Anchors: Jun 25 16:18:27.031112 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:18:27.034028 dracut-cmdline[220]: dracut-dracut-053 Jun 25 16:18:27.031153 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:18:27.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.033904 systemd-resolved[209]: Defaulting to hostname 'linux'. Jun 25 16:18:27.034882 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:18:27.036216 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:18:27.046775 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=05dd62847a393595c8cf7409b58afa2d4045a2186c3cd58722296be6f3bc4fa9 Jun 25 16:18:27.107881 kernel: Loading iSCSI transport class v2.0-870. Jun 25 16:18:27.118863 kernel: iscsi: registered transport (tcp) Jun 25 16:18:27.147309 kernel: iscsi: registered transport (qla4xxx) Jun 25 16:18:27.147404 kernel: QLogic iSCSI HBA Driver Jun 25 16:18:27.180047 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 16:18:27.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.185003 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 16:18:27.256890 kernel: raid6: avx2x4 gen() 30123 MB/s Jun 25 16:18:27.273867 kernel: raid6: avx2x2 gen() 31445 MB/s Jun 25 16:18:27.303898 kernel: raid6: avx2x1 gen() 20917 MB/s Jun 25 16:18:27.303977 kernel: raid6: using algorithm avx2x2 gen() 31445 MB/s Jun 25 16:18:27.323890 kernel: raid6: .... xor() 16108 MB/s, rmw enabled Jun 25 16:18:27.323985 kernel: raid6: using avx2x2 recovery algorithm Jun 25 16:18:27.328880 kernel: xor: automatically using best checksumming function avx Jun 25 16:18:27.481890 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jun 25 16:18:27.491949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:18:27.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:27.492000 audit: BPF prog-id=7 op=LOAD Jun 25 16:18:27.492000 audit: BPF prog-id=8 op=LOAD Jun 25 16:18:27.504090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:18:27.515707 systemd-udevd[397]: Using default interface naming scheme 'v252'. Jun 25 16:18:27.520916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:18:27.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.522212 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 16:18:27.533286 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Jun 25 16:18:27.559408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:18:27.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.568978 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:18:27.603628 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:18:27.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:27.632885 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 25 16:18:27.658164 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 25 16:18:27.658299 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 16:18:27.658311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 16:18:27.658323 kernel: GPT:9289727 != 19775487 Jun 25 16:18:27.658333 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 16:18:27.658344 kernel: GPT:9289727 != 19775487 Jun 25 16:18:27.658357 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 16:18:27.658367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:18:27.658381 kernel: AVX2 version of gcm_enc/dec engaged. Jun 25 16:18:27.658392 kernel: AES CTR mode by8 optimization enabled Jun 25 16:18:27.675856 kernel: libata version 3.00 loaded. Jun 25 16:18:27.677879 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Jun 25 16:18:27.678615 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:18:27.719851 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 25 16:18:27.720056 kernel: BTRFS: device fsid dda7891e-deba-495b-b677-4df6bea75326 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (454) Jun 25 16:18:27.720080 kernel: scsi host0: ata_piix Jun 25 16:18:27.720206 kernel: scsi host1: ata_piix Jun 25 16:18:27.720315 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jun 25 16:18:27.720328 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jun 25 16:18:27.726260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 25 16:18:27.730602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:18:27.734382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jun 25 16:18:27.735714 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 25 16:18:27.757015 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 16:18:27.765419 disk-uuid[532]: Primary Header is updated. Jun 25 16:18:27.765419 disk-uuid[532]: Secondary Entries is updated. Jun 25 16:18:27.765419 disk-uuid[532]: Secondary Header is updated. Jun 25 16:18:27.769011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:18:27.771860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:18:27.774862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:18:27.848863 kernel: ata2: found unknown device (class 0) Jun 25 16:18:27.850849 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 25 16:18:27.851906 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 25 16:18:27.915880 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 25 16:18:27.940091 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 16:18:27.940106 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 25 16:18:28.773509 disk-uuid[533]: The operation has completed successfully. Jun 25 16:18:28.775005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 25 16:18:28.797025 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 16:18:28.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:28.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:28.797114 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 16:18:28.814021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 16:18:28.818602 sh[551]: Success Jun 25 16:18:28.831868 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 25 16:18:28.857130 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 16:18:28.874127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 16:18:28.876861 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 16:18:28.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:28.895488 kernel: BTRFS info (device dm-0): first mount of filesystem dda7891e-deba-495b-b677-4df6bea75326 Jun 25 16:18:28.895550 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:18:28.895563 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 16:18:28.896548 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 16:18:28.897329 kernel: BTRFS info (device dm-0): using free space tree Jun 25 16:18:28.902301 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 16:18:28.903547 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 16:18:28.911976 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 25 16:18:28.913796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 16:18:28.924964 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:18:28.925018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:18:28.925035 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:18:28.931922 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 16:18:28.933701 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:18:28.941208 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 16:18:28.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:28.947020 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 16:18:29.046554 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:18:29.047652 ignition[665]: Ignition 2.15.0 Jun 25 16:18:29.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.047663 ignition[665]: Stage: fetch-offline Jun 25 16:18:29.049000 audit: BPF prog-id=9 op=LOAD Jun 25 16:18:29.047722 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:29.047732 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:29.047853 ignition[665]: parsed url from cmdline: "" Jun 25 16:18:29.047857 ignition[665]: no config URL provided Jun 25 16:18:29.047862 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 16:18:29.047868 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jun 25 16:18:29.055067 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:18:29.047891 ignition[665]: op(1): [started] loading QEMU firmware config module Jun 25 16:18:29.047896 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 25 16:18:29.057602 ignition[665]: op(1): [finished] loading QEMU firmware config module Jun 25 16:18:29.076863 systemd-networkd[741]: lo: Link UP Jun 25 16:18:29.076871 systemd-networkd[741]: lo: Gained carrier Jun 25 16:18:29.089461 systemd-networkd[741]: Enumeration completed Jun 25 16:18:29.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.089710 systemd-networkd[741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:18:29.089713 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:18:29.090817 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:18:29.091028 systemd-networkd[741]: eth0: Link UP Jun 25 16:18:29.091031 systemd-networkd[741]: eth0: Gained carrier Jun 25 16:18:29.091036 systemd-networkd[741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:18:29.092179 systemd[1]: Reached target network.target - Network. 
Jun 25 16:18:29.098591 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:18:29.104912 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:18:29.113316 ignition[665]: parsing config with SHA512: a98e94f0ae2a608c9773f528402355cc4f7b479036a0d0328f1e779deec223d479a2e94c0b582b9f782ccadcff25ddd605d1aab9b321533f83a5005529ac95bb Jun 25 16:18:29.116930 unknown[665]: fetched base config from "system" Jun 25 16:18:29.116941 unknown[665]: fetched user config from "qemu" Jun 25 16:18:29.117453 ignition[665]: fetch-offline: fetch-offline passed Jun 25 16:18:29.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.118612 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:18:29.117525 ignition[665]: Ignition finished successfully Jun 25 16:18:29.120182 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 25 16:18:29.132988 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 16:18:29.134082 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:18:29.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.137104 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 16:18:29.140956 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:18:29.140956 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 16:18:29.140956 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 16:18:29.140956 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 16:18:29.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.153731 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 16:18:29.153731 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 16:18:29.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.142509 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 16:18:29.144325 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 16:18:29.157009 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 16:18:29.158469 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:18:29.159768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 25 16:18:29.161124 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:18:29.163649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 16:18:29.174806 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:18:29.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.207346 ignition[745]: Ignition 2.15.0 Jun 25 16:18:29.207356 ignition[745]: Stage: kargs Jun 25 16:18:29.207458 ignition[745]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:29.207466 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:29.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.210148 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 16:18:29.208314 ignition[745]: kargs: kargs passed Jun 25 16:18:29.208354 ignition[745]: Ignition finished successfully Jun 25 16:18:29.217978 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 16:18:29.227462 ignition[767]: Ignition 2.15.0 Jun 25 16:18:29.227635 ignition[767]: Stage: disks Jun 25 16:18:29.227733 ignition[767]: no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:29.227746 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:29.231228 ignition[767]: disks: disks passed Jun 25 16:18:29.231292 ignition[767]: Ignition finished successfully Jun 25 16:18:29.233380 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 16:18:29.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.233527 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 16:18:29.236538 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:18:29.238580 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:18:29.240634 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:18:29.242498 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:18:29.254950 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 16:18:29.266513 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 16:18:29.369522 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 16:18:29.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.377928 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 16:18:29.453870 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 16:18:29.454472 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 16:18:29.456377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 16:18:29.468931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 16:18:29.471547 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jun 25 16:18:29.473979 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 16:18:29.476695 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (782) Jun 25 16:18:29.474031 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 16:18:29.483094 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:18:29.483110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:18:29.483122 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:18:29.474053 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 16:18:29.485392 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:18:29.487433 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 16:18:29.502975 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 16:18:29.528701 initrd-setup-root[806]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 16:18:29.532752 initrd-setup-root[813]: cut: /sysroot/etc/group: No such file or directory Jun 25 16:18:29.536679 initrd-setup-root[820]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 16:18:29.539434 initrd-setup-root[827]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 16:18:29.602035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 16:18:29.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.608974 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 16:18:29.612020 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 16:18:29.615858 kernel: BTRFS info (device vda6): last unmount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:18:29.629190 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 16:18:29.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.707592 ignition[896]: INFO : Ignition 2.15.0 Jun 25 16:18:29.707592 ignition[896]: INFO : Stage: mount Jun 25 16:18:29.727737 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:29.727737 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:29.730546 ignition[896]: INFO : mount: mount passed Jun 25 16:18:29.731421 ignition[896]: INFO : Ignition finished successfully Jun 25 16:18:29.732867 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 16:18:29.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:29.746962 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 16:18:29.903282 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 16:18:29.916169 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 16:18:29.923919 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (905) Jun 25 16:18:29.923950 kernel: BTRFS info (device vda6): first mount of filesystem 86bb1873-22f4-4b9b-84d4-c8e8b30f7c8f Jun 25 16:18:29.923963 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 25 16:18:29.925852 kernel: BTRFS info (device vda6): using free space tree Jun 25 16:18:29.929062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 16:18:29.947311 ignition[923]: INFO : Ignition 2.15.0 Jun 25 16:18:29.947311 ignition[923]: INFO : Stage: files Jun 25 16:18:29.949436 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:29.949436 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:29.949436 ignition[923]: DEBUG : files: compiled without relabeling support, skipping Jun 25 16:18:29.949436 ignition[923]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 16:18:29.949436 ignition[923]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 16:18:29.957055 ignition[923]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 16:18:29.957055 ignition[923]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 16:18:29.957055 ignition[923]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 16:18:29.957055 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:18:29.957055 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 16:18:29.951808 unknown[923]: wrote ssh authorized keys file for user: core Jun 25 16:18:29.985923 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 16:18:30.049368 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 16:18:30.049368 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 
16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:18:30.053860 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 16:18:30.413550 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 16:18:30.528039 systemd-networkd[741]: eth0: Gained IPv6LL Jun 25 16:18:30.898798 ignition[923]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 16:18:30.898798 ignition[923]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 16:18:30.902781 ignition[923]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jun 25 16:18:30.904983 ignition[923]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:18:30.929933 ignition[923]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 25 16:18:30.931586 ignition[923]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jun 25 16:18:30.931586 ignition[923]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jun 25 16:18:30.934410 ignition[923]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 16:18:30.935855 ignition[923]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 16:18:30.937713 ignition[923]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 
25 16:18:30.937713 ignition[923]: INFO : files: files passed Jun 25 16:18:30.941549 ignition[923]: INFO : Ignition finished successfully Jun 25 16:18:30.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.940026 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 16:18:30.951085 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 16:18:30.953710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 16:18:30.955264 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 16:18:30.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.955357 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 16:18:30.959757 initrd-setup-root-after-ignition[948]: grep: /sysroot/oem/oem-release: No such file or directory Jun 25 16:18:30.961215 initrd-setup-root-after-ignition[950]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:18:30.961215 initrd-setup-root-after-ignition[950]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:18:30.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.967716 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 16:18:30.962000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:18:30.964999 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 16:18:30.981026 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 16:18:30.994083 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 16:18:30.994172 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 16:18:30.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:30.996324 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 16:18:30.998655 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 16:18:31.000715 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 16:18:31.001870 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 16:18:31.012723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
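The files stage above pulls remote artifacts (the helm tarball and the kubernetes sysext image) with "GET <url>: attempt #1" entries and reports "GET result: OK" once a fetch succeeds. A rough Python sketch of that fetch-with-retries pattern follows; the helper name, attempt count, and backoff are illustrative assumptions, not Ignition's actual implementation (Ignition itself is written in Go):

    import time
    import urllib.request

    def fetch_with_retries(url: str, dest: str, attempts: int = 3, backoff: float = 2.0) -> None:
        # Log each attempt in the same spirit as the journal entries above,
        # retrying with a simple linear backoff between failures.
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp, open(dest, "wb") as out:
                    out.write(resp.read())
                print("GET result: OK")
                return
            except OSError as err:
                print(f"GET error: {err}")
                if attempt < attempts:
                    time.sleep(backoff * attempt)
        raise RuntimeError(f"failed to fetch {url} after {attempts} attempts")

    # Example usage (hypothetical local destination path):
    # fetch_with_retries("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #                    "/tmp/helm-v3.13.2-linux-amd64.tar.gz")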
Jun 25 16:18:31.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.018357 kernel: kauditd_printk_skb: 33 callbacks suppressed Jun 25 16:18:31.018386 kernel: audit: type=1130 audit(1719332311.014:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.031069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 16:18:31.039323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:18:31.081462 kernel: audit: type=1131 audit(1719332311.039:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.081486 kernel: audit: type=1131 audit(1719332311.044:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.081496 kernel: audit: type=1131 audit(1719332311.047:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.081504 kernel: audit: type=1131 audit(1719332311.050:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.081514 kernel: audit: type=1131 audit(1719332311.059:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.081523 kernel: audit: type=1131 audit(1719332311.062:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:31.039474 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:18:31.083917 iscsid[747]: iscsid shutting down. Jun 25 16:18:31.039660 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 16:18:31.089066 kernel: audit: type=1131 audit(1719332311.084:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.089149 ignition[968]: INFO : Ignition 2.15.0 Jun 25 16:18:31.089149 ignition[968]: INFO : Stage: umount Jun 25 16:18:31.089149 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 16:18:31.089149 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 25 16:18:31.089149 ignition[968]: INFO : umount: umount passed Jun 25 16:18:31.089149 ignition[968]: INFO : Ignition finished successfully Jun 25 16:18:31.101209 kernel: audit: type=1131 audit(1719332311.089:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.101240 kernel: audit: type=1131 audit(1719332311.097:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.039813 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 16:18:31.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.039950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 16:18:31.040260 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 16:18:31.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.043272 systemd[1]: Stopped target basic.target - Basic System. Jun 25 16:18:31.043421 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 16:18:31.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.043587 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jun 25 16:18:31.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.043749 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 16:18:31.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.044101 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 16:18:31.044266 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 16:18:31.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.044441 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 16:18:31.044607 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 16:18:31.044808 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:18:31.045120 systemd[1]: Stopped target swap.target - Swaps. Jun 25 16:18:31.045256 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 16:18:31.045352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 16:18:31.045543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:18:31.048552 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 16:18:31.048639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 16:18:31.048764 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 16:18:31.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.048860 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 16:18:31.051773 systemd[1]: Stopped target paths.target - Path Units. Jun 25 16:18:31.055085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 16:18:31.058903 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:18:31.059347 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 16:18:31.059453 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 16:18:31.059640 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 16:18:31.059750 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 16:18:31.060171 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 16:18:31.060256 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 16:18:31.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.064278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jun 25 16:18:31.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.066721 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 16:18:31.080499 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 16:18:31.083791 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 16:18:31.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.084014 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:18:31.163000 audit: BPF prog-id=6 op=UNLOAD Jun 25 16:18:31.085865 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 16:18:31.085954 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 16:18:31.092317 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 16:18:31.092432 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 16:18:31.099190 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 16:18:31.099761 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 16:18:31.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.099847 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 16:18:31.103583 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 16:18:31.103660 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 16:18:31.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.106772 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 16:18:31.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.106803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 16:18:31.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:31.108075 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 16:18:31.108114 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 16:18:31.110023 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 16:18:31.110056 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 16:18:31.112261 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 16:18:31.112294 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 16:18:31.114518 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 16:18:31.116726 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 16:18:31.116884 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 16:18:31.117806 systemd[1]: Stopped target network.target - Network. Jun 25 16:18:31.120692 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 16:18:31.120734 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 16:18:31.121781 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 16:18:31.123684 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 16:18:31.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.129863 systemd-networkd[741]: eth0: DHCPv6 lease lost Jun 25 16:18:31.194000 audit: BPF prog-id=9 op=UNLOAD Jun 25 16:18:31.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.131665 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 16:18:31.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.131756 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 16:18:31.133062 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 16:18:31.133115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:18:31.147139 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 16:18:31.148599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 16:18:31.148646 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 16:18:31.149844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 16:18:31.149882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:18:31.151967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 16:18:31.152003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 16:18:31.152147 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:18:31.153249 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 16:18:31.153708 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 16:18:31.153798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 16:18:31.154865 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jun 25 16:18:31.154955 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 16:18:31.155793 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 16:18:31.155863 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 16:18:31.155983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 16:18:31.156019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:18:31.159618 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 16:18:31.160025 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 16:18:31.160097 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 16:18:31.168169 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 16:18:31.168299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:18:31.169141 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 16:18:31.169180 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 16:18:31.171896 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 16:18:31.171926 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:18:31.173855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 16:18:31.173906 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 16:18:31.174274 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 16:18:31.174307 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 16:18:31.177866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 16:18:31.177904 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 16:18:31.190662 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 16:18:31.192471 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 16:18:31.192535 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:18:31.195649 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 16:18:31.195692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:18:31.196847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 16:18:31.196887 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 16:18:31.199597 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 16:18:31.247301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 16:18:31.247411 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 16:18:31.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.248641 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 16:18:31.264048 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jun 25 16:18:31.271116 systemd[1]: Switching root. Jun 25 16:18:31.291687 systemd-journald[194]: Journal stopped Jun 25 16:18:32.080803 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jun 25 16:18:32.080873 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 16:18:32.080886 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 16:18:32.080897 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 16:18:32.080919 kernel: SELinux: policy capability open_perms=1 Jun 25 16:18:32.080928 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 16:18:32.080937 kernel: SELinux: policy capability always_check_network=0 Jun 25 16:18:32.080946 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 16:18:32.080955 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 16:18:32.080967 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 16:18:32.080975 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 16:18:32.080984 systemd[1]: Successfully loaded SELinux policy in 37.376ms. Jun 25 16:18:32.081017 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.972ms. Jun 25 16:18:32.081028 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 16:18:32.081037 systemd[1]: Detected virtualization kvm. Jun 25 16:18:32.081048 systemd[1]: Detected architecture x86-64. Jun 25 16:18:32.081057 systemd[1]: Detected first boot. Jun 25 16:18:32.081066 systemd[1]: Initializing machine ID from VM UUID. Jun 25 16:18:32.081075 systemd[1]: Populated /etc with preset unit settings. Jun 25 16:18:32.081085 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 16:18:32.081097 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 16:18:32.081106 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 16:18:32.081116 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 16:18:32.081125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 16:18:32.081135 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 16:18:32.081144 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 16:18:32.081154 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 16:18:32.081164 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 16:18:32.081178 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 16:18:32.081188 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 16:18:32.081198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 16:18:32.081208 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 16:18:32.081217 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 16:18:32.081227 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 16:18:32.081236 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 16:18:32.081246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 16:18:32.081258 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 16:18:32.081267 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 16:18:32.081277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 16:18:32.081286 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 16:18:32.081295 systemd[1]: Reached target slices.target - Slice Units. Jun 25 16:18:32.081305 systemd[1]: Reached target swap.target - Swaps. Jun 25 16:18:32.081314 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 16:18:32.081323 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 16:18:32.081336 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 16:18:32.081346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 16:18:32.081355 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 16:18:32.081365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 16:18:32.081374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 16:18:32.081386 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 16:18:32.081396 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 16:18:32.081405 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 16:18:32.081415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:32.081426 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 16:18:32.081435 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 16:18:32.081445 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 16:18:32.081456 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 16:18:32.081468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:18:32.081480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 16:18:32.081492 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 16:18:32.081503 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:18:32.081516 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:18:32.081528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:18:32.081540 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 16:18:32.081551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:18:32.081581 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 16:18:32.081599 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 16:18:32.081616 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Jun 25 16:18:32.081632 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 16:18:32.081648 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 16:18:32.081667 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 16:18:32.081678 kernel: fuse: init (API version 7.37) Jun 25 16:18:32.081687 kernel: loop: module loaded Jun 25 16:18:32.081696 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 16:18:32.081705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 16:18:32.081715 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 16:18:32.081728 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 16:18:32.081738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 16:18:32.081748 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 16:18:32.081759 systemd[1]: Stopped verity-setup.service. Jun 25 16:18:32.081771 systemd-journald[1075]: Journal started Jun 25 16:18:32.081809 systemd-journald[1075]: Runtime Journal (/run/log/journal/fa08475784ee45ccbac9bcd3101670cb) is 6.0M, max 48.4M, 42.3M free. Jun 25 16:18:31.348000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 16:18:31.555000 audit: BPF prog-id=10 op=LOAD Jun 25 16:18:31.555000 audit: BPF prog-id=10 op=UNLOAD Jun 25 16:18:31.555000 audit: BPF prog-id=11 op=LOAD Jun 25 16:18:31.555000 audit: BPF prog-id=11 op=UNLOAD Jun 25 16:18:31.907000 audit: BPF prog-id=12 op=LOAD Jun 25 16:18:31.907000 audit: BPF prog-id=3 op=UNLOAD Jun 25 16:18:31.907000 audit: BPF prog-id=13 op=LOAD Jun 25 16:18:31.907000 audit: BPF prog-id=14 op=LOAD Jun 25 16:18:31.907000 audit: BPF prog-id=4 op=UNLOAD Jun 25 16:18:31.907000 audit: BPF prog-id=5 op=UNLOAD Jun 25 16:18:31.907000 audit: BPF prog-id=15 op=LOAD Jun 25 16:18:31.907000 audit: BPF prog-id=12 op=UNLOAD Jun 25 16:18:31.908000 audit: BPF prog-id=16 op=LOAD Jun 25 16:18:31.908000 audit: BPF prog-id=17 op=LOAD Jun 25 16:18:31.908000 audit: BPF prog-id=13 op=UNLOAD Jun 25 16:18:31.908000 audit: BPF prog-id=14 op=UNLOAD Jun 25 16:18:31.908000 audit: BPF prog-id=18 op=LOAD Jun 25 16:18:31.908000 audit: BPF prog-id=15 op=UNLOAD Jun 25 16:18:31.908000 audit: BPF prog-id=19 op=LOAD Jun 25 16:18:31.908000 audit: BPF prog-id=20 op=LOAD Jun 25 16:18:31.908000 audit: BPF prog-id=16 op=UNLOAD Jun 25 16:18:31.908000 audit: BPF prog-id=17 op=UNLOAD Jun 25 16:18:31.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.920000 audit: BPF prog-id=18 op=UNLOAD Jun 25 16:18:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:32.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.057000 audit: BPF prog-id=21 op=LOAD Jun 25 16:18:32.057000 audit: BPF prog-id=22 op=LOAD Jun 25 16:18:32.058000 audit: BPF prog-id=23 op=LOAD Jun 25 16:18:32.058000 audit: BPF prog-id=19 op=UNLOAD Jun 25 16:18:32.058000 audit: BPF prog-id=20 op=UNLOAD Jun 25 16:18:32.078000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 16:18:32.078000 audit[1075]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffda46deae0 a2=4000 a3=7ffda46deb7c items=0 ppid=1 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:32.078000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 16:18:31.887662 systemd[1]: Queued start job for default target multi-user.target. Jun 25 16:18:32.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:31.887672 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 25 16:18:31.909765 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 16:18:32.084973 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:32.088100 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 16:18:32.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.088761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 16:18:32.090157 kernel: ACPI: bus type drm_connector registered Jun 25 16:18:32.090500 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 16:18:32.091782 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 16:18:32.092896 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 16:18:32.094078 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 16:18:32.095267 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 16:18:32.096523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 16:18:32.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:32.098007 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 16:18:32.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.099513 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 16:18:32.099633 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 16:18:32.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.100991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:18:32.101101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:18:32.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.102437 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:18:32.102553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:18:32.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.103950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:18:32.104062 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:18:32.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.105594 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 16:18:32.105708 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 16:18:32.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:32.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.107014 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:18:32.107128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:18:32.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.108479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 16:18:32.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.109807 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 16:18:32.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.111168 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 16:18:32.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.112923 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 16:18:32.131025 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 16:18:32.133626 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 16:18:32.134829 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 16:18:32.136480 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 16:18:32.138858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 16:18:32.140563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:18:32.142335 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 16:18:32.143738 systemd-journald[1075]: Time spent on flushing to /var/log/journal/fa08475784ee45ccbac9bcd3101670cb is 21.061ms for 1079 entries. Jun 25 16:18:32.143738 systemd-journald[1075]: System Journal (/var/log/journal/fa08475784ee45ccbac9bcd3101670cb) is 8.0M, max 195.6M, 187.6M free. Jun 25 16:18:32.171579 systemd-journald[1075]: Received client request to flush runtime journal. 
Jun 25 16:18:32.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.143827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:18:32.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.145473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 16:18:32.148264 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 16:18:32.153167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 16:18:32.154697 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 16:18:32.156206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 16:18:32.157867 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 16:18:32.159754 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 16:18:32.170058 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 16:18:32.171689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 16:18:32.173243 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 16:18:32.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.174876 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 16:18:32.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.177646 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 16:18:32.185010 udevadm[1101]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 16:18:32.192545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 16:18:32.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.619254 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 16:18:32.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:32.620000 audit: BPF prog-id=24 op=LOAD Jun 25 16:18:32.620000 audit: BPF prog-id=25 op=LOAD Jun 25 16:18:32.620000 audit: BPF prog-id=7 op=UNLOAD Jun 25 16:18:32.620000 audit: BPF prog-id=8 op=UNLOAD Jun 25 16:18:32.635009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 16:18:32.650329 systemd-udevd[1105]: Using default interface naming scheme 'v252'. Jun 25 16:18:32.663622 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 16:18:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.665000 audit: BPF prog-id=26 op=LOAD Jun 25 16:18:32.674001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 16:18:32.677000 audit: BPF prog-id=27 op=LOAD Jun 25 16:18:32.677000 audit: BPF prog-id=28 op=LOAD Jun 25 16:18:32.677000 audit: BPF prog-id=29 op=LOAD Jun 25 16:18:32.679575 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 16:18:32.682702 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 16:18:32.702859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1109) Jun 25 16:18:32.705476 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1112) Jun 25 16:18:32.709459 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 16:18:32.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.717884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jun 25 16:18:32.737874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 25 16:18:32.738867 kernel: ACPI: button: Power Button [PWRF] Jun 25 16:18:32.750878 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 25 16:18:32.764868 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jun 25 16:18:32.770498 systemd-networkd[1115]: lo: Link UP Jun 25 16:18:32.770512 systemd-networkd[1115]: lo: Gained carrier Jun 25 16:18:32.771010 systemd-networkd[1115]: Enumeration completed Jun 25 16:18:32.771106 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:18:32.771116 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 16:18:32.771131 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 16:18:32.772164 systemd-networkd[1115]: eth0: Link UP Jun 25 16:18:32.772175 systemd-networkd[1115]: eth0: Gained carrier Jun 25 16:18:32.772185 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 16:18:32.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:32.777986 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 16:18:32.782012 systemd-networkd[1115]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 25 16:18:32.785854 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 16:18:32.857109 kernel: SVM: TSC scaling supported Jun 25 16:18:32.857208 kernel: kvm: Nested Virtualization enabled Jun 25 16:18:32.857225 kernel: SVM: kvm: Nested Paging enabled Jun 25 16:18:32.857242 kernel: SVM: Virtual VMLOAD VMSAVE supported Jun 25 16:18:32.858043 kernel: SVM: Virtual GIF supported Jun 25 16:18:32.858065 kernel: SVM: LBR virtualization supported Jun 25 16:18:32.873868 kernel: EDAC MC: Ver: 3.0.0 Jun 25 16:18:32.906402 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 16:18:32.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.919997 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 16:18:32.927529 lvm[1142]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:18:32.962093 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 16:18:32.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:32.963434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 16:18:32.976172 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 16:18:32.982422 lvm[1143]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 16:18:33.009109 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 16:18:33.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.010434 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 16:18:33.011601 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 16:18:33.011620 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 16:18:33.012698 systemd[1]: Reached target machines.target - Containers. Jun 25 16:18:33.025108 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 16:18:33.026826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:18:33.026955 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:33.028727 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... 
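[Editor's note] The DHCPv4 lease above (10.0.0.61/16 via 10.0.0.1) comes from the catch-all unit /usr/lib/systemd/network/zz-default.network that systemd-networkd matched against eth0. The contents of that file are not shown in this log, so the following is only a minimal sketch of what a catch-all DHCP .network unit looks like and how one could be dropped into /etc/systemd/network to override it; the file name 50-dhcp-example.network is hypothetical, not part of the log.

# Minimal sketch (assumption: unit name and /etc/systemd/network location are
# illustrative; the actual zz-default.network shipped under /usr/lib is not shown here).
from pathlib import Path

UNIT = """\
[Match]
# Match any interface name; a more specific unit in /etc would take precedence.
Name=*

[Network]
# Request a DHCP lease, matching the "DHCPv4 address ... acquired" line above.
DHCP=yes
"""

def install_example_unit(path: str = "/etc/systemd/network/50-dhcp-example.network") -> None:
    """Write the illustrative unit; systemd-networkd reads it after a restart."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(UNIT)

if __name__ == "__main__":
    install_example_unit()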
Jun 25 16:18:33.031461 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 16:18:33.034053 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 16:18:33.037455 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 16:18:33.039155 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1145 (bootctl) Jun 25 16:18:33.041263 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 16:18:33.048792 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 16:18:33.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.098297 systemd-fsck[1152]: fsck.fat 4.2 (2021-01-31) Jun 25 16:18:33.098297 systemd-fsck[1152]: /dev/vda1: 808 files, 120378/258078 clusters Jun 25 16:18:33.088807 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 16:18:33.098122 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 16:18:33.100853 kernel: loop0: detected capacity change from 0 to 80584 Jun 25 16:18:33.254809 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 16:18:33.268259 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 16:18:33.268907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 16:18:33.268968 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 16:18:33.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.275207 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 16:18:33.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.294958 kernel: loop1: detected capacity change from 0 to 209816 Jun 25 16:18:33.332867 kernel: loop2: detected capacity change from 0 to 139360 Jun 25 16:18:33.359892 kernel: loop3: detected capacity change from 0 to 80584 Jun 25 16:18:33.365882 kernel: loop4: detected capacity change from 0 to 209816 Jun 25 16:18:33.373851 kernel: loop5: detected capacity change from 0 to 139360 Jun 25 16:18:33.383002 (sd-sysext)[1158]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 25 16:18:33.383514 (sd-sysext)[1158]: Merged extensions into '/usr'. Jun 25 16:18:33.385180 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jun 25 16:18:33.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.394499 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 16:18:33.396355 systemd[1]: Starting ensure-sysext.service... Jun 25 16:18:33.398648 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 16:18:33.400087 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 16:18:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.408273 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 16:18:33.409492 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 16:18:33.409970 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 16:18:33.410981 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 16:18:33.411622 systemd[1]: Reloading. Jun 25 16:18:33.541620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:18:33.605000 audit: BPF prog-id=30 op=LOAD Jun 25 16:18:33.605000 audit: BPF prog-id=26 op=UNLOAD Jun 25 16:18:33.605000 audit: BPF prog-id=31 op=LOAD Jun 25 16:18:33.605000 audit: BPF prog-id=27 op=UNLOAD Jun 25 16:18:33.605000 audit: BPF prog-id=32 op=LOAD Jun 25 16:18:33.605000 audit: BPF prog-id=33 op=LOAD Jun 25 16:18:33.605000 audit: BPF prog-id=28 op=UNLOAD Jun 25 16:18:33.605000 audit: BPF prog-id=29 op=UNLOAD Jun 25 16:18:33.606000 audit: BPF prog-id=34 op=LOAD Jun 25 16:18:33.606000 audit: BPF prog-id=35 op=LOAD Jun 25 16:18:33.606000 audit: BPF prog-id=24 op=UNLOAD Jun 25 16:18:33.606000 audit: BPF prog-id=25 op=UNLOAD Jun 25 16:18:33.608000 audit: BPF prog-id=36 op=LOAD Jun 25 16:18:33.608000 audit: BPF prog-id=21 op=UNLOAD Jun 25 16:18:33.608000 audit: BPF prog-id=37 op=LOAD Jun 25 16:18:33.608000 audit: BPF prog-id=38 op=LOAD Jun 25 16:18:33.608000 audit: BPF prog-id=22 op=UNLOAD Jun 25 16:18:33.608000 audit: BPF prog-id=23 op=UNLOAD Jun 25 16:18:33.611905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 16:18:33.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.617401 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 16:18:33.620307 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 16:18:33.622830 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 16:18:33.624000 audit: BPF prog-id=39 op=LOAD Jun 25 16:18:33.627968 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 25 16:18:33.636000 audit: BPF prog-id=40 op=LOAD Jun 25 16:18:33.638848 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 16:18:33.641506 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 16:18:33.648000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 16:18:33.651632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:33.651899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 16:18:33.653616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:18:33.656000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 16:18:33.656000 audit[1238]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff17082090 a2=420 a3=0 items=0 ppid=1217 pid=1238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:33.656000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 16:18:33.656509 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:18:33.660014 augenrules[1238]: No rules Jun 25 16:18:33.660490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:18:33.661703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:18:33.661876 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:33.662010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:33.663668 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:18:33.665469 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 16:18:33.667275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:18:33.667398 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:18:33.675602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:18:33.675962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:18:33.677701 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:18:33.677936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:18:33.680853 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 16:18:33.684039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:33.686259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
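[Editor's note] The audit PROCTITLE records interleaved above carry the process command line hex-encoded, with NUL bytes separating arguments. A small sketch (the helper name is mine, not part of the audit tooling) decodes the value logged at 16:18:33.656 and recovers the auditctl invocation made while audit-rules.service was starting:

def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex string back into a readable command line."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

if __name__ == "__main__":
    raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(raw))  # -> /sbin/auditctl -R /etc/audit/audit.rules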
Jun 25 16:18:34.183814 systemd-timesyncd[1229]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 25 16:18:34.183857 systemd-timesyncd[1229]: Initial clock synchronization to Tue 2024-06-25 16:18:34.183742 UTC. Jun 25 16:18:34.184101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:18:34.187477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:18:34.189886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:18:34.191176 systemd-resolved[1226]: Positive Trust Anchors: Jun 25 16:18:34.191188 systemd-resolved[1226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 16:18:34.191268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:18:34.191283 systemd-resolved[1226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 16:18:34.191392 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:34.192979 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 16:18:34.194067 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:34.194394 systemd-resolved[1226]: Defaulting to hostname 'linux'. Jun 25 16:18:34.194875 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 16:18:34.196665 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 16:18:34.198215 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 16:18:34.199703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:18:34.199826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:18:34.201299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:18:34.201413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:18:34.202846 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:18:34.202955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:18:34.204416 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 16:18:34.207843 systemd[1]: Reached target network.target - Network. Jun 25 16:18:34.209063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 16:18:34.210324 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 16:18:34.211422 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:34.211646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jun 25 16:18:34.230033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 16:18:34.233297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 16:18:34.236257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 16:18:34.238792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 16:18:34.240198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 16:18:34.240374 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:34.240516 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 16:18:34.240608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 16:18:34.241765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 16:18:34.241915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 16:18:34.243500 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 16:18:34.243623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 16:18:34.246195 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 16:18:34.246373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 16:18:34.247999 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 16:18:34.248137 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 16:18:34.249767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 16:18:34.249919 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 16:18:34.251264 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 16:18:34.252536 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 16:18:34.253907 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 16:18:34.255258 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 16:18:34.256570 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 16:18:34.257747 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 16:18:34.257784 systemd[1]: Reached target paths.target - Path Units. Jun 25 16:18:34.258791 systemd[1]: Reached target timers.target - Timer Units. Jun 25 16:18:34.260368 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 16:18:34.262933 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 16:18:34.271432 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jun 25 16:18:34.272680 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:34.272742 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 16:18:34.273358 systemd[1]: Finished ensure-sysext.service. Jun 25 16:18:34.274430 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 16:18:34.276508 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 16:18:34.277591 systemd[1]: Reached target basic.target - Basic System. Jun 25 16:18:34.278591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:18:34.278613 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 16:18:34.279707 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 16:18:34.282170 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 16:18:34.284487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 16:18:34.288069 jq[1259]: false Jun 25 16:18:34.286778 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 16:18:34.288077 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 16:18:34.289431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 16:18:34.291873 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 16:18:34.294339 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 16:18:34.296902 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 16:18:34.300029 extend-filesystems[1260]: Found loop3 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found loop4 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found loop5 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found sr0 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda1 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda2 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda3 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found usr Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda4 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda6 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda7 Jun 25 16:18:34.300029 extend-filesystems[1260]: Found vda9 Jun 25 16:18:34.300029 extend-filesystems[1260]: Checking size of /dev/vda9 Jun 25 16:18:34.339331 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1114) Jun 25 16:18:34.300098 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jun 25 16:18:34.339531 extend-filesystems[1260]: Resized partition /dev/vda9 Jun 25 16:18:34.306771 dbus-daemon[1258]: [system] SELinux support is enabled Jun 25 16:18:34.346443 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 25 16:18:34.301740 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 16:18:34.346556 extend-filesystems[1284]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 16:18:34.301790 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 16:18:34.348105 update_engine[1273]: I0625 16:18:34.321189 1273 main.cc:92] Flatcar Update Engine starting Jun 25 16:18:34.348105 update_engine[1273]: I0625 16:18:34.330687 1273 update_check_scheduler.cc:74] Next update check in 2m54s Jun 25 16:18:34.302198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 16:18:34.348452 jq[1275]: true Jun 25 16:18:34.303054 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 16:18:34.306227 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 16:18:34.348772 jq[1283]: true Jun 25 16:18:34.309313 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 16:18:34.316457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 16:18:34.316692 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 16:18:34.317068 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 16:18:34.317294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 16:18:34.320075 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 16:18:34.320467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 16:18:34.326343 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 16:18:34.326372 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 16:18:34.334384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 16:18:34.334413 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 16:18:34.357254 tar[1281]: linux-amd64/helm Jun 25 16:18:34.357334 systemd[1]: Started update-engine.service - Update Engine. Jun 25 16:18:34.365492 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 16:18:34.381850 systemd-logind[1271]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 16:18:34.381878 systemd-logind[1271]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 16:18:34.383053 systemd-logind[1271]: New seat seat0. Jun 25 16:18:34.389567 systemd[1]: Started systemd-logind.service - User Login Management. 
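[Editor's note] A quick arithmetic check of the root-filesystem resize logged here, using only the numbers from the log: /dev/vda9 uses 4 KiB ext4 blocks (the "(4k)" in the resize2fs output just below), so the block counts translate to sizes as follows.

BLOCK_SIZE = 4096  # 4 KiB ext4 blocks, per the resize2fs output below

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

before, after = 553_472, 1_864_699           # from the EXT4-fs resize message above
print(f"before: {gib(before):.2f} GiB")      # ~2.11 GiB
print(f"after:  {gib(after):.2f} GiB")       # ~7.11 GiB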
Jun 25 16:18:34.400787 locksmithd[1293]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 16:18:34.408253 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 25 16:18:34.430762 extend-filesystems[1284]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 25 16:18:34.430762 extend-filesystems[1284]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 16:18:34.430762 extend-filesystems[1284]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 25 16:18:34.438641 extend-filesystems[1260]: Resized filesystem in /dev/vda9 Jun 25 16:18:34.439640 bash[1303]: Updated "/home/core/.ssh/authorized_keys" Jun 25 16:18:34.432031 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 16:18:34.432245 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 16:18:34.435323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 16:18:34.439361 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 16:18:34.521057 containerd[1287]: time="2024-06-25T16:18:34.520969034Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 16:18:34.542314 containerd[1287]: time="2024-06-25T16:18:34.542212097Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 16:18:34.542314 containerd[1287]: time="2024-06-25T16:18:34.542272841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.543518 containerd[1287]: time="2024-06-25T16:18:34.543491485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:18:34.543555 containerd[1287]: time="2024-06-25T16:18:34.543517043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.543779 containerd[1287]: time="2024-06-25T16:18:34.543754368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544039 containerd[1287]: time="2024-06-25T16:18:34.543778744Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 16:18:34.544039 containerd[1287]: time="2024-06-25T16:18:34.543859906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544039 containerd[1287]: time="2024-06-25T16:18:34.543910471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544039 containerd[1287]: time="2024-06-25T16:18:34.543921441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544039 containerd[1287]: time="2024-06-25T16:18:34.543972847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544173 containerd[1287]: time="2024-06-25T16:18:34.544153546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544204 containerd[1287]: time="2024-06-25T16:18:34.544175848Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 16:18:34.544204 containerd[1287]: time="2024-06-25T16:18:34.544184675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544324 containerd[1287]: time="2024-06-25T16:18:34.544304740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 16:18:34.544358 containerd[1287]: time="2024-06-25T16:18:34.544322954Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 16:18:34.544377 containerd[1287]: time="2024-06-25T16:18:34.544366105Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 16:18:34.544395 containerd[1287]: time="2024-06-25T16:18:34.544377276Z" level=info msg="metadata content store policy set" policy=shared Jun 25 16:18:34.550435 containerd[1287]: time="2024-06-25T16:18:34.550410766Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 16:18:34.550478 containerd[1287]: time="2024-06-25T16:18:34.550436875Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 16:18:34.550478 containerd[1287]: time="2024-06-25T16:18:34.550449459Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 16:18:34.550478 containerd[1287]: time="2024-06-25T16:18:34.550473404Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 16:18:34.550540 containerd[1287]: time="2024-06-25T16:18:34.550490556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 16:18:34.550540 containerd[1287]: time="2024-06-25T16:18:34.550500054Z" level=info msg="NRI interface is disabled by configuration." Jun 25 16:18:34.550540 containerd[1287]: time="2024-06-25T16:18:34.550510754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 16:18:34.550628 containerd[1287]: time="2024-06-25T16:18:34.550598358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 16:18:34.550657 containerd[1287]: time="2024-06-25T16:18:34.550633153Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 16:18:34.550657 containerd[1287]: time="2024-06-25T16:18:34.550644725Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 16:18:34.550695 containerd[1287]: time="2024-06-25T16:18:34.550656938Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jun 25 16:18:34.550695 containerd[1287]: time="2024-06-25T16:18:34.550670403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550695 containerd[1287]: time="2024-06-25T16:18:34.550684980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550746 containerd[1287]: time="2024-06-25T16:18:34.550695710Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550746 containerd[1287]: time="2024-06-25T16:18:34.550706801Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550746 containerd[1287]: time="2024-06-25T16:18:34.550718403Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550746 containerd[1287]: time="2024-06-25T16:18:34.550729414Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550746 containerd[1287]: time="2024-06-25T16:18:34.550740534Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.550831 containerd[1287]: time="2024-06-25T16:18:34.550750032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 16:18:34.550850 containerd[1287]: time="2024-06-25T16:18:34.550827948Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551349536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551381656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551397416Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551423565Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551473949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551489639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551504326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551552406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551602701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551691868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551715873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551730841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551746410Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 16:18:34.551985 containerd[1287]: time="2024-06-25T16:18:34.551872897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551895139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551909175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551935735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551950733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551977343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.551991119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 16:18:34.552279 containerd[1287]: time="2024-06-25T16:18:34.552004805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 16:18:34.552403 containerd[1287]: time="2024-06-25T16:18:34.552259161Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 16:18:34.552403 containerd[1287]: time="2024-06-25T16:18:34.552317992Z" level=info msg="Connect containerd service" Jun 25 16:18:34.552403 containerd[1287]: time="2024-06-25T16:18:34.552346816Z" level=info msg="using legacy CRI server" Jun 25 16:18:34.552403 containerd[1287]: time="2024-06-25T16:18:34.552352867Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 16:18:34.552753 containerd[1287]: time="2024-06-25T16:18:34.552643983Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 16:18:34.553135 containerd[1287]: time="2024-06-25T16:18:34.553110357Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:18:34.553846 containerd[1287]: time="2024-06-25T16:18:34.553824706Z" 
level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 16:18:34.553904 containerd[1287]: time="2024-06-25T16:18:34.553849863Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 16:18:34.553904 containerd[1287]: time="2024-06-25T16:18:34.553860654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 16:18:34.553904 containerd[1287]: time="2024-06-25T16:18:34.553869961Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 16:18:34.554132 containerd[1287]: time="2024-06-25T16:18:34.553994204Z" level=info msg="Start subscribing containerd event" Jun 25 16:18:34.554132 containerd[1287]: time="2024-06-25T16:18:34.554066119Z" level=info msg="Start recovering state" Jun 25 16:18:34.554201 containerd[1287]: time="2024-06-25T16:18:34.554153252Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554297312Z" level=info msg="Start event monitor" Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554320295Z" level=info msg="Start snapshots syncer" Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554329472Z" level=info msg="Start cni network conf syncer for default" Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554328831Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554335323Z" level=info msg="Start streaming server" Jun 25 16:18:34.556121 containerd[1287]: time="2024-06-25T16:18:34.554398863Z" level=info msg="containerd successfully booted in 0.034655s" Jun 25 16:18:34.554472 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 16:18:34.773794 tar[1281]: linux-amd64/LICENSE Jun 25 16:18:34.773942 tar[1281]: linux-amd64/README.md Jun 25 16:18:34.785857 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 16:18:34.981948 sshd_keygen[1280]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 16:18:34.983362 systemd-networkd[1115]: eth0: Gained IPv6LL Jun 25 16:18:34.985216 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 16:18:34.986920 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 16:18:35.000593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jun 25 16:18:35.003392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:18:35.005564 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 16:18:35.007477 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 16:18:35.012385 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 16:18:35.024869 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 16:18:35.025055 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 16:18:35.026796 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 25 16:18:35.026958 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jun 25 16:18:35.028653 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 25 16:18:35.030105 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 16:18:35.033047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 16:18:35.037924 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 16:18:35.047809 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 16:18:35.050935 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 16:18:35.052501 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 16:18:35.608382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:18:35.609899 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 16:18:35.612965 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 16:18:35.620486 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 16:18:35.620675 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 16:18:35.622042 systemd[1]: Startup finished in 820ms (kernel) + 4.639s (initrd) + 3.822s (userspace) = 9.282s. Jun 25 16:18:36.128126 kubelet[1346]: E0625 16:18:36.128009 1346 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:18:36.129902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:18:36.130018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:18:44.138014 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 16:18:44.139299 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:43744.service - OpenSSH per-connection server daemon (10.0.0.1:43744). Jun 25 16:18:44.174520 sshd[1356]: Accepted publickey for core from 10.0.0.1 port 43744 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.176416 sshd[1356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.186449 systemd-logind[1271]: New session 1 of user core. Jun 25 16:18:44.187733 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 16:18:44.197603 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 16:18:44.208705 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 16:18:44.222616 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 16:18:44.225165 (systemd)[1359]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.297258 systemd[1359]: Queued start job for default target default.target. Jun 25 16:18:44.312915 systemd[1359]: Reached target paths.target - Paths. Jun 25 16:18:44.312952 systemd[1359]: Reached target sockets.target - Sockets. Jun 25 16:18:44.312967 systemd[1359]: Reached target timers.target - Timers. Jun 25 16:18:44.312977 systemd[1359]: Reached target basic.target - Basic System. Jun 25 16:18:44.313038 systemd[1359]: Reached target default.target - Main User Target. Jun 25 16:18:44.313066 systemd[1359]: Startup finished in 82ms. Jun 25 16:18:44.313177 systemd[1]: Started user@500.service - User Manager for UID 500. 
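[Editor's note] The kubelet exit above is the only unit failure in this stretch of the log: kubelet.service starts before any cluster bootstrap has produced /var/lib/kubelet/config.yaml, so it exits with status 1 until that file exists (it is normally written during bootstrap, for example by kubeadm init or kubeadm join). Purely for illustration, a bare-bones KubeletConfiguration that would satisfy the file check could be written like this; the cgroupDriver value mirrors the SystemdCgroup runc option in the containerd configuration dumped earlier in this log.

# Minimal sketch, not the file kubeadm would generate; shown only to illustrate
# what the missing /var/lib/kubelet/config.yaml contains at minimum.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

def write_minimal_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    """Create the kubelet config file that the failing unit above was looking for."""
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    Path(path).write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_minimal_config()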
Jun 25 16:18:44.314815 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 16:18:44.385788 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:43756.service - OpenSSH per-connection server daemon (10.0.0.1:43756). Jun 25 16:18:44.411890 sshd[1368]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.412973 sshd[1368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.416523 systemd-logind[1271]: New session 2 of user core. Jun 25 16:18:44.431382 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 16:18:44.486479 sshd[1368]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:44.496318 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:43756.service: Deactivated successfully. Jun 25 16:18:44.497044 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 16:18:44.497662 systemd-logind[1271]: Session 2 logged out. Waiting for processes to exit. Jun 25 16:18:44.499743 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:43770.service - OpenSSH per-connection server daemon (10.0.0.1:43770). Jun 25 16:18:44.500649 systemd-logind[1271]: Removed session 2. Jun 25 16:18:44.525513 sshd[1374]: Accepted publickey for core from 10.0.0.1 port 43770 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.526802 sshd[1374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.531025 systemd-logind[1271]: New session 3 of user core. Jun 25 16:18:44.541569 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 16:18:44.594206 sshd[1374]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:44.614773 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:43770.service: Deactivated successfully. Jun 25 16:18:44.615430 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 16:18:44.615911 systemd-logind[1271]: Session 3 logged out. Waiting for processes to exit. Jun 25 16:18:44.616981 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:43782.service - OpenSSH per-connection server daemon (10.0.0.1:43782). Jun 25 16:18:44.617685 systemd-logind[1271]: Removed session 3. Jun 25 16:18:44.643800 sshd[1380]: Accepted publickey for core from 10.0.0.1 port 43782 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.644822 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.648484 systemd-logind[1271]: New session 4 of user core. Jun 25 16:18:44.655347 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 16:18:44.709021 sshd[1380]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:44.720638 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:43782.service: Deactivated successfully. Jun 25 16:18:44.721369 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 16:18:44.721936 systemd-logind[1271]: Session 4 logged out. Waiting for processes to exit. Jun 25 16:18:44.723383 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:43792.service - OpenSSH per-connection server daemon (10.0.0.1:43792). Jun 25 16:18:44.724105 systemd-logind[1271]: Removed session 4. Jun 25 16:18:44.748408 sshd[1386]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.749720 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.753319 systemd-logind[1271]: New session 5 of user core. 
Jun 25 16:18:44.763394 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 16:18:44.822610 sudo[1389]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 16:18:44.822860 sudo[1389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:18:44.835645 sudo[1389]: pam_unix(sudo:session): session closed for user root Jun 25 16:18:44.837431 sshd[1386]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:44.849538 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:43792.service: Deactivated successfully. Jun 25 16:18:44.850169 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 16:18:44.850715 systemd-logind[1271]: Session 5 logged out. Waiting for processes to exit. Jun 25 16:18:44.852273 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804). Jun 25 16:18:44.852981 systemd-logind[1271]: Removed session 5. Jun 25 16:18:44.876637 sshd[1393]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:44.877623 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:44.880884 systemd-logind[1271]: New session 6 of user core. Jun 25 16:18:44.890468 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 16:18:44.946531 sudo[1397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 16:18:44.946806 sudo[1397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:18:44.949827 sudo[1397]: pam_unix(sudo:session): session closed for user root Jun 25 16:18:44.954508 sudo[1396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 16:18:44.954731 sudo[1396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:18:44.970582 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 16:18:44.970000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:18:44.971881 auditctl[1400]: No rules Jun 25 16:18:44.972063 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 16:18:44.972206 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 16:18:44.972521 kernel: kauditd_printk_skb: 137 callbacks suppressed Jun 25 16:18:44.972565 kernel: audit: type=1305 audit(1719332324.970:187): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 16:18:44.973645 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
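The kernel audit records that follow carry the executed command line in their PROCTITLE field as NUL-separated hex. Decoding it is the quickest way to read a burst like this; the first proctitle below decodes to "/sbin/auditctl -D" (flush all rules), which matches the "No rules" messages from auditctl and augenrules. A small shell sketch, assuming xxd is available:

    # Decode an audit PROCTITLE hex string into a readable command line.
    # The sample hex value is copied from the first audit record below.
    decode_proctitle() {
      echo "$1" | xxd -r -p | tr '\0' ' '   # hex -> bytes, NULs -> spaces
      echo
    }
    decode_proctitle 2F7362696E2F617564697463746C002D44   # -> /sbin/auditctl -D

When the records are still in the audit log itself, `ausearch -i` performs the same interpretation automatically.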
Jun 25 16:18:44.970000 audit[1400]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec83f8ac0 a2=420 a3=0 items=0 ppid=1 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:44.978463 kernel: audit: type=1300 audit(1719332324.970:187): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec83f8ac0 a2=420 a3=0 items=0 ppid=1 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:44.970000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:18:44.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:44.982531 kernel: audit: type=1327 audit(1719332324.970:187): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 16:18:44.982567 kernel: audit: type=1131 audit(1719332324.971:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:44.993640 augenrules[1417]: No rules Jun 25 16:18:44.994358 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 16:18:44.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:44.995542 sudo[1396]: pam_unix(sudo:session): session closed for user root Jun 25 16:18:44.997112 sshd[1393]: pam_unix(sshd:session): session closed for user core Jun 25 16:18:44.994000 audit[1396]: USER_END pid=1396 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.000928 kernel: audit: type=1130 audit(1719332324.993:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.001000 kernel: audit: type=1106 audit(1719332324.994:190): pid=1396 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.001035 kernel: audit: type=1104 audit(1719332324.994:191): pid=1396 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:44.994000 audit[1396]: CRED_DISP pid=1396 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:44.996000 audit[1393]: USER_END pid=1393 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.007655 kernel: audit: type=1106 audit(1719332324.996:192): pid=1393 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.007700 kernel: audit: type=1104 audit(1719332324.996:193): pid=1393 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:44.996000 audit[1393]: CRED_DISP pid=1393 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.010392 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:43804.service: Deactivated successfully. Jun 25 16:18:45.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.61:22-10.0.0.1:43804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.010901 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 16:18:45.011349 systemd-logind[1271]: Session 6 logged out. Waiting for processes to exit. Jun 25 16:18:45.012606 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812). Jun 25 16:18:45.013418 systemd-logind[1271]: Removed session 6. Jun 25 16:18:45.013569 kernel: audit: type=1131 audit(1719332325.009:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.61:22-10.0.0.1:43804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.61:22-10.0.0.1:43812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:18:45.036000 audit[1424]: USER_ACCT pid=1424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.037529 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:18:45.037000 audit[1424]: CRED_ACQ pid=1424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.037000 audit[1424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf0f926c0 a2=3 a3=7f45020e7480 items=0 ppid=1 pid=1424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:18:45.038735 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:18:45.042015 systemd-logind[1271]: New session 7 of user core. Jun 25 16:18:45.048364 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 16:18:45.051000 audit[1424]: USER_START pid=1424 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.053000 audit[1426]: CRED_ACQ pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:18:45.100000 audit[1427]: USER_ACCT pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.100000 audit[1427]: CRED_REFR pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.101338 sudo[1427]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 16:18:45.101547 sudo[1427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 16:18:45.101000 audit[1427]: USER_START pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:18:45.191844 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 16:18:45.414760 dockerd[1437]: time="2024-06-25T16:18:45.414585333Z" level=info msg="Starting up" Jun 25 16:18:45.457543 dockerd[1437]: time="2024-06-25T16:18:45.457493230Z" level=info msg="Loading containers: start." 
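The sudo records above show the core user elevating to run /home/core/install.sh immediately before dockerd comes up. When auditing a boot like this one, the sudo trail is easier to pull from the journal or the audit log than to scrape from the console; a sketch, assuming default journald and auditd setups:

    # Sudo activity as recorded by journald (syslog identifier "sudo").
    journalctl -t sudo --no-pager
    # Cross-check against the audit trail: session open/close events from sudo.
    ausearch -m USER_START,USER_END -c sudo -i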
Jun 25 16:18:45.502000 audit[1472]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.502000 audit[1472]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe23b97760 a2=0 a3=7f386146be90 items=0 ppid=1437 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.502000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 16:18:45.504000 audit[1474]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.504000 audit[1474]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe4ff0f050 a2=0 a3=7f5918f0be90 items=0 ppid=1437 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.504000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 16:18:45.505000 audit[1476]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.505000 audit[1476]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff524fbaf0 a2=0 a3=7fc19d446e90 items=0 ppid=1437 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.505000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:18:45.507000 audit[1478]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.507000 audit[1478]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff331a9e50 a2=0 a3=7fca4ef8de90 items=0 ppid=1437 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.507000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:18:45.509000 audit[1480]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.509000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd15f04480 a2=0 a3=7ffb7f6a3e90 items=0 ppid=1437 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.509000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 16:18:45.511000 audit[1482]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1482 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 16:18:45.511000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc58e04610 a2=0 a3=7f10087ebe90 items=0 ppid=1437 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.511000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 16:18:45.522000 audit[1484]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.522000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8f1ea400 a2=0 a3=7f0a621a0e90 items=0 ppid=1437 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.522000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 16:18:45.523000 audit[1486]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.523000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc1ec2d060 a2=0 a3=7fe0622e6e90 items=0 ppid=1437 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 16:18:45.525000 audit[1488]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.525000 audit[1488]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc86a15e10 a2=0 a3=7f9d11c9ee90 items=0 ppid=1437 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.525000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:18:45.534000 audit[1492]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.534000 audit[1492]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd91020720 a2=0 a3=7f1994508e90 items=0 ppid=1437 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.534000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:18:45.535000 audit[1493]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.535000 audit[1493]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdd6f2a720 a2=0 a3=7f28402f8e90 items=0 ppid=1437 
pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.535000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:18:45.543247 kernel: Initializing XFRM netlink socket Jun 25 16:18:45.572000 audit[1501]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.572000 audit[1501]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffcd2957680 a2=0 a3=7f32b0aa7e90 items=0 ppid=1437 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 16:18:45.584000 audit[1504]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.584000 audit[1504]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc5b861a00 a2=0 a3=7fd455f12e90 items=0 ppid=1437 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.584000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 16:18:45.588000 audit[1508]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.588000 audit[1508]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd2d140e20 a2=0 a3=7f2fa9625e90 items=0 ppid=1437 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.588000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 16:18:45.590000 audit[1510]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.590000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe1de3f8d0 a2=0 a3=7fe268ba7e90 items=0 ppid=1437 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.590000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 16:18:45.591000 audit[1512]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.591000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff1a47b7f0 
a2=0 a3=7efe0db5be90 items=0 ppid=1437 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 16:18:45.593000 audit[1514]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.593000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe543486f0 a2=0 a3=7f025d4a0e90 items=0 ppid=1437 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.593000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 16:18:45.595000 audit[1516]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.595000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fffeb14b020 a2=0 a3=7f3ed7502e90 items=0 ppid=1437 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.595000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 16:18:45.600000 audit[1519]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.600000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffe6b356e0 a2=0 a3=7fa2aa55ce90 items=0 ppid=1437 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.600000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 16:18:45.602000 audit[1521]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.602000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffc1fcba0c0 a2=0 a3=7fa972df9e90 items=0 ppid=1437 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.602000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 16:18:45.603000 audit[1523]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 
25 16:18:45.603000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd4b066340 a2=0 a3=7fcc9f225e90 items=0 ppid=1437 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.603000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 16:18:45.605000 audit[1525]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.605000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8496ee30 a2=0 a3=7f1fb8941e90 items=0 ppid=1437 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.605000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 16:18:45.607125 systemd-networkd[1115]: docker0: Link UP Jun 25 16:18:45.615000 audit[1529]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.615000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9cc2c440 a2=0 a3=7fd526487e90 items=0 ppid=1437 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.615000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:18:45.616000 audit[1530]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:18:45.616000 audit[1530]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdf2ac28d0 a2=0 a3=7f2ce4587e90 items=0 ppid=1437 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:18:45.616000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 16:18:45.617841 dockerd[1437]: time="2024-06-25T16:18:45.617812432Z" level=info msg="Loading containers: done." 
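The NETFILTER_CFG / SYSCALL / PROCTITLE triples above are dockerd programming its standard chains while "Loading containers"; decoding the PROCTITLE hex (as sketched earlier) shows ordinary iptables invocations. A representative subset, decoded from the records above:

    # Chain setup dockerd performs at startup, decoded from the PROCTITLE fields.
    /usr/sbin/iptables --wait -t nat    -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
    /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
    /usr/sbin/iptables --wait -t filter -N DOCKER-USER
    /usr/sbin/iptables --wait -A DOCKER-USER -j RETURN
    /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER

The exe field in these records, /usr/sbin/xtables-nft-multi, shows the calls go through the nft backend of iptables, which is why the kernel reports them as nft_register_chain / nft_register_rule operations.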
Jun 25 16:18:45.670359 dockerd[1437]: time="2024-06-25T16:18:45.670257147Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 16:18:45.670494 dockerd[1437]: time="2024-06-25T16:18:45.670475736Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 16:18:45.670615 dockerd[1437]: time="2024-06-25T16:18:45.670588999Z" level=info msg="Daemon has completed initialization" Jun 25 16:18:45.706019 dockerd[1437]: time="2024-06-25T16:18:45.705947313Z" level=info msg="API listen on /run/docker.sock" Jun 25 16:18:45.706146 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 16:18:45.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:46.331864 containerd[1287]: time="2024-06-25T16:18:46.331800334Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 16:18:46.381014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 16:18:46.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:46.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:46.381290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:18:46.394520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:18:46.488626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:18:46.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:46.728636 kubelet[1585]: E0625 16:18:46.728475 1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:18:46.732076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:18:46.732229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:18:46.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:18:47.155574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731596202.mount: Deactivated successfully. 
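The "Scheduled restart job, restart counter is at 1" message above is systemd's Restart=/RestartSec= handling kicking in after the kubelet's exit; the actual values configured for kubelet.service are not visible in this log. A hedged way to inspect them on the node:

    # Inspect the restart policy behind the "Scheduled restart job" messages.
    # (The values on this particular node are not shown in the log.)
    systemctl cat kubelet.service | grep -E '^(Restart|RestartSec)='
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts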
Jun 25 16:18:48.717109 containerd[1287]: time="2024-06-25T16:18:48.717049113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:48.718103 containerd[1287]: time="2024-06-25T16:18:48.718017339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 16:18:48.719516 containerd[1287]: time="2024-06-25T16:18:48.719474701Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:48.721694 containerd[1287]: time="2024-06-25T16:18:48.721663273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:48.723845 containerd[1287]: time="2024-06-25T16:18:48.723799107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:48.725055 containerd[1287]: time="2024-06-25T16:18:48.725006340Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.39314914s" Jun 25 16:18:48.725121 containerd[1287]: time="2024-06-25T16:18:48.725058047Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 16:18:48.760335 containerd[1287]: time="2024-06-25T16:18:48.760283212Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 16:18:51.175142 containerd[1287]: time="2024-06-25T16:18:51.175059936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:51.176023 containerd[1287]: time="2024-06-25T16:18:51.175952239Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 16:18:51.177501 containerd[1287]: time="2024-06-25T16:18:51.177468421Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:51.179525 containerd[1287]: time="2024-06-25T16:18:51.179486885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:51.181437 containerd[1287]: time="2024-06-25T16:18:51.181416001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:51.182443 containerd[1287]: time="2024-06-25T16:18:51.182407440Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.42207771s" Jun 25 16:18:51.182500 containerd[1287]: time="2024-06-25T16:18:51.182448397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 16:18:51.208130 containerd[1287]: time="2024-06-25T16:18:51.208057484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 16:18:52.847098 containerd[1287]: time="2024-06-25T16:18:52.846999085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:52.847926 containerd[1287]: time="2024-06-25T16:18:52.847854428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 16:18:52.849562 containerd[1287]: time="2024-06-25T16:18:52.849529659Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:52.852285 containerd[1287]: time="2024-06-25T16:18:52.852131927Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:52.854756 containerd[1287]: time="2024-06-25T16:18:52.854703317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:52.856325 containerd[1287]: time="2024-06-25T16:18:52.856235830Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.648106641s" Jun 25 16:18:52.856325 containerd[1287]: time="2024-06-25T16:18:52.856309448Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 16:18:52.879595 containerd[1287]: time="2024-06-25T16:18:52.879545437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 16:18:54.505872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141312489.mount: Deactivated successfully. 
Jun 25 16:18:56.501536 containerd[1287]: time="2024-06-25T16:18:56.501472587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:56.505166 containerd[1287]: time="2024-06-25T16:18:56.505033132Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 16:18:56.507181 containerd[1287]: time="2024-06-25T16:18:56.507090429Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:56.511160 containerd[1287]: time="2024-06-25T16:18:56.510991953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:56.514588 containerd[1287]: time="2024-06-25T16:18:56.514495941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:56.515531 containerd[1287]: time="2024-06-25T16:18:56.515473404Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 3.635875809s" Jun 25 16:18:56.515531 containerd[1287]: time="2024-06-25T16:18:56.515524740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 16:18:56.551095 containerd[1287]: time="2024-06-25T16:18:56.551032534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 16:18:56.983176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 16:18:56.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:56.983461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:18:56.987260 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 16:18:56.987361 kernel: audit: type=1130 audit(1719332336.982:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:56.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:56.992759 kernel: audit: type=1131 audit(1719332336.982:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:56.993814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:18:57.079092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 16:18:57.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:57.112847 kernel: audit: type=1130 audit(1719332337.078:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:18:57.363949 kubelet[1691]: E0625 16:18:57.363515 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:18:57.365920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:18:57.366126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:18:57.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:18:57.370265 kernel: audit: type=1131 audit(1719332337.365:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:18:59.158708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679523180.mount: Deactivated successfully. Jun 25 16:18:59.170716 containerd[1287]: time="2024-06-25T16:18:59.170619985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:59.172177 containerd[1287]: time="2024-06-25T16:18:59.172023827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 16:18:59.209003 containerd[1287]: time="2024-06-25T16:18:59.208931947Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:59.221686 containerd[1287]: time="2024-06-25T16:18:59.221512451Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:59.225607 containerd[1287]: time="2024-06-25T16:18:59.225419505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:18:59.227197 containerd[1287]: time="2024-06-25T16:18:59.226499680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 2.675390411s" Jun 25 16:18:59.227197 containerd[1287]: time="2024-06-25T16:18:59.226582014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 16:18:59.260897 containerd[1287]: time="2024-06-25T16:18:59.260776236Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 16:19:00.093317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092512316.mount: Deactivated successfully. Jun 25 16:19:02.908459 containerd[1287]: time="2024-06-25T16:19:02.908370342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:02.909316 containerd[1287]: time="2024-06-25T16:19:02.909255051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 16:19:02.910784 containerd[1287]: time="2024-06-25T16:19:02.910734825Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:02.913273 containerd[1287]: time="2024-06-25T16:19:02.913190478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:02.915387 containerd[1287]: time="2024-06-25T16:19:02.915343805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:02.916595 containerd[1287]: time="2024-06-25T16:19:02.916532463Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.655691276s" Jun 25 16:19:02.916684 containerd[1287]: time="2024-06-25T16:19:02.916594690Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 16:19:02.936753 containerd[1287]: time="2024-06-25T16:19:02.936692646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 16:19:04.290772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146108996.mount: Deactivated successfully. Jun 25 16:19:07.459105 containerd[1287]: time="2024-06-25T16:19:07.459039824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:07.551612 containerd[1287]: time="2024-06-25T16:19:07.551513414Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 16:19:07.601536 containerd[1287]: time="2024-06-25T16:19:07.601479693Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:07.617638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 16:19:07.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:07.617906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:07.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:19:07.625235 kernel: audit: type=1130 audit(1719332347.616:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:07.625302 kernel: audit: type=1131 audit(1719332347.616:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:07.631646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:07.646579 containerd[1287]: time="2024-06-25T16:19:07.646507419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:07.729401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:07.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:07.733253 kernel: audit: type=1130 audit(1719332347.728:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:07.835056 kubelet[1775]: E0625 16:19:07.834993 1775 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 16:19:07.837019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 16:19:07.837174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 16:19:07.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 16:19:07.855260 kernel: audit: type=1131 audit(1719332347.836:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 16:19:08.789575 containerd[1287]: time="2024-06-25T16:19:08.789469757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:08.790496 containerd[1287]: time="2024-06-25T16:19:08.790452086Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 5.853702512s" Jun 25 16:19:08.790563 containerd[1287]: time="2024-06-25T16:19:08.790506851Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 16:19:11.063506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:11.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:11.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:11.069115 kernel: audit: type=1130 audit(1719332351.062:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:11.069170 kernel: audit: type=1131 audit(1719332351.062:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:11.079683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:11.098312 systemd[1]: Reloading. Jun 25 16:19:11.917015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
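The Reloading pass flags docker.socket for pointing ListenStream= at the legacy /var/run/ directory; systemd rewrites the path to /run/docker.sock at runtime, but the warning recurs on every reload until the unit itself is updated. A hedged sketch of the drop-in fix (the drop-in file name is illustrative, and the stock unit's other settings are not shown in this log):

    # Override only the socket path; the empty ListenStream= clears the
    # inherited legacy value before setting the non-legacy one.
    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-socket-path.conf >/dev/null <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload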
Jun 25 16:19:11.996000 audit: BPF prog-id=44 op=LOAD Jun 25 16:19:11.996000 audit: BPF prog-id=30 op=UNLOAD Jun 25 16:19:11.998811 kernel: audit: type=1334 audit(1719332351.996:243): prog-id=44 op=LOAD Jun 25 16:19:11.998861 kernel: audit: type=1334 audit(1719332351.996:244): prog-id=30 op=UNLOAD Jun 25 16:19:11.998878 kernel: audit: type=1334 audit(1719332351.997:245): prog-id=45 op=LOAD Jun 25 16:19:11.997000 audit: BPF prog-id=45 op=LOAD Jun 25 16:19:11.999726 kernel: audit: type=1334 audit(1719332351.997:246): prog-id=31 op=UNLOAD Jun 25 16:19:11.997000 audit: BPF prog-id=31 op=UNLOAD Jun 25 16:19:11.997000 audit: BPF prog-id=46 op=LOAD Jun 25 16:19:11.997000 audit: BPF prog-id=47 op=LOAD Jun 25 16:19:11.997000 audit: BPF prog-id=32 op=UNLOAD Jun 25 16:19:11.997000 audit: BPF prog-id=33 op=UNLOAD Jun 25 16:19:11.999000 audit: BPF prog-id=48 op=LOAD Jun 25 16:19:11.999000 audit: BPF prog-id=49 op=LOAD Jun 25 16:19:11.999000 audit: BPF prog-id=34 op=UNLOAD Jun 25 16:19:11.999000 audit: BPF prog-id=35 op=UNLOAD Jun 25 16:19:12.002000 audit: BPF prog-id=50 op=LOAD Jun 25 16:19:12.002000 audit: BPF prog-id=39 op=UNLOAD Jun 25 16:19:12.003000 audit: BPF prog-id=51 op=LOAD Jun 25 16:19:12.003000 audit: BPF prog-id=40 op=UNLOAD Jun 25 16:19:12.004000 audit: BPF prog-id=52 op=LOAD Jun 25 16:19:12.004000 audit: BPF prog-id=36 op=UNLOAD Jun 25 16:19:12.004000 audit: BPF prog-id=53 op=LOAD Jun 25 16:19:12.004000 audit: BPF prog-id=54 op=LOAD Jun 25 16:19:12.004000 audit: BPF prog-id=37 op=UNLOAD Jun 25 16:19:12.004000 audit: BPF prog-id=38 op=UNLOAD Jun 25 16:19:12.005000 audit: BPF prog-id=55 op=LOAD Jun 25 16:19:12.005000 audit: BPF prog-id=41 op=UNLOAD Jun 25 16:19:12.006000 audit: BPF prog-id=56 op=LOAD Jun 25 16:19:12.006000 audit: BPF prog-id=57 op=LOAD Jun 25 16:19:12.006000 audit: BPF prog-id=42 op=UNLOAD Jun 25 16:19:12.006000 audit: BPF prog-id=43 op=UNLOAD Jun 25 16:19:12.029795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:12.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:12.032076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:12.032445 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:19:12.032672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:12.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:12.035240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:12.130657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:12.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:12.175855 kubelet[1922]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 16:19:12.175855 kubelet[1922]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:19:12.175855 kubelet[1922]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:19:12.176279 kubelet[1922]: I0625 16:19:12.175830 1922 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:19:12.482702 kubelet[1922]: I0625 16:19:12.482615 1922 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:19:12.482702 kubelet[1922]: I0625 16:19:12.482646 1922 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:19:12.482854 kubelet[1922]: I0625 16:19:12.482847 1922 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:19:12.537042 kubelet[1922]: I0625 16:19:12.536984 1922 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:19:12.537212 kubelet[1922]: E0625 16:19:12.537127 1922 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.554008 kubelet[1922]: I0625 16:19:12.553963 1922 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:19:12.554192 kubelet[1922]: I0625 16:19:12.554178 1922 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:19:12.554397 kubelet[1922]: I0625 16:19:12.554366 1922 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:19:12.554875 kubelet[1922]: I0625 16:19:12.554854 1922 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:19:12.554875 kubelet[1922]: I0625 16:19:12.554871 1922 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:19:12.555786 kubelet[1922]: I0625 16:19:12.555763 1922 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:19:12.558243 kubelet[1922]: I0625 16:19:12.558213 1922 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:19:12.558243 kubelet[1922]: I0625 16:19:12.558244 1922 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:19:12.558303 kubelet[1922]: I0625 16:19:12.558267 1922 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:19:12.558303 kubelet[1922]: I0625 16:19:12.558281 1922 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:19:12.559558 kubelet[1922]: W0625 16:19:12.559497 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.559558 kubelet[1922]: E0625 16:19:12.559557 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.562956 kubelet[1922]: I0625 16:19:12.562920 1922 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:19:12.563462 kubelet[1922]: W0625 16:19:12.563429 1922 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.563510 kubelet[1922]: E0625 16:19:12.563471 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.577743 kubelet[1922]: W0625 16:19:12.577699 1922 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 16:19:12.578398 kubelet[1922]: I0625 16:19:12.578358 1922 server.go:1232] "Started kubelet" Jun 25 16:19:12.578562 kubelet[1922]: I0625 16:19:12.578533 1922 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:19:12.578905 kubelet[1922]: I0625 16:19:12.578886 1922 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:19:12.578963 kubelet[1922]: I0625 16:19:12.578948 1922 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:19:12.579460 kubelet[1922]: I0625 16:19:12.579437 1922 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:19:12.594939 kubelet[1922]: I0625 16:19:12.594917 1922 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:19:12.595255 kubelet[1922]: E0625 16:19:12.595242 1922 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:19:12.595369 kubelet[1922]: E0625 16:19:12.595343 1922 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:19:12.595369 kubelet[1922]: I0625 16:19:12.595374 1922 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:19:12.595570 kubelet[1922]: I0625 16:19:12.595456 1922 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:19:12.596588 kubelet[1922]: W0625 16:19:12.596546 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.596701 kubelet[1922]: E0625 16:19:12.596691 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.597000 audit[1935]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.597000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0a1a2850 a2=0 a3=7fc559f68e90 items=0 ppid=1922 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:19:12.598351 kubelet[1922]: E0625 16:19:12.598142 1922 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms" Jun 25 16:19:12.598000 audit[1936]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.598000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccd8c8280 a2=0 a3=7f384f48ae90 items=0 ppid=1922 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:19:12.599519 kubelet[1922]: E0625 16:19:12.598955 1922 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17dc4baa4e44935d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"localhost"}, FirstTimestamp:time.Date(2024, time.June, 25, 16, 19, 12, 578327389, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 16, 19, 12, 578327389, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.61:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.61:6443: connect: connection refused'(may retry after sleeping) Jun 25 16:19:12.600348 kubelet[1922]: I0625 16:19:12.600324 1922 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:19:12.603000 audit[1939]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.603000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffcf420b60 a2=0 a3=7f57eb8dbe90 items=0 ppid=1922 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:19:12.606000 audit[1941]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.606000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd07f36b50 a2=0 a3=7f3cb7999e90 items=0 ppid=1922 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:19:12.613000 audit[1944]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.613000 audit[1944]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffc269c730 a2=0 a3=7f982fdabe90 items=0 ppid=1922 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 16:19:12.614048 kubelet[1922]: I0625 16:19:12.614017 1922 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 16:19:12.614000 audit[1946]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:12.614000 audit[1946]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc545e2770 a2=0 a3=7f33f4beae90 items=0 ppid=1922 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 16:19:12.615181 kubelet[1922]: I0625 16:19:12.615161 1922 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:19:12.615294 kubelet[1922]: I0625 16:19:12.615272 1922 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:19:12.615506 kubelet[1922]: I0625 16:19:12.615493 1922 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:19:12.615000 audit[1947]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.615000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d6b2740 a2=0 a3=7f2892ff3e90 items=0 ppid=1922 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:19:12.615835 kubelet[1922]: E0625 16:19:12.615819 1922 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:19:12.616000 audit[1948]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:12.616000 audit[1948]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8c6925d0 a2=0 a3=7f26dc65fe90 items=0 ppid=1922 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 16:19:12.616000 audit[1949]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.616000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5fc4a980 a2=0 a3=7f522b379e90 items=0 ppid=1922 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.616000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:19:12.617000 audit[1951]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 
16:19:12.617000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcc5bccef0 a2=0 a3=7fd0ac715e90 items=0 ppid=1922 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 16:19:12.617000 audit[1952]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:12.617000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff80f4c900 a2=0 a3=7fb70f76ee90 items=0 ppid=1922 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.617000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:19:12.618025 kubelet[1922]: W0625 16:19:12.617769 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.618144 kubelet[1922]: E0625 16:19:12.618130 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:12.618000 audit[1954]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:12.619543 kernel: kauditd_printk_skb: 60 callbacks suppressed Jun 25 16:19:12.619640 kernel: audit: type=1325 audit(1719332352.618:285): table=filter:37 family=10 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:12.619727 kubelet[1922]: I0625 16:19:12.619700 1922 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:19:12.619727 kubelet[1922]: I0625 16:19:12.619715 1922 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:19:12.619727 kubelet[1922]: I0625 16:19:12.619729 1922 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:19:12.618000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe446b90f0 a2=0 a3=7f46dc649e90 items=0 ppid=1922 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.628137 kernel: audit: type=1300 audit(1719332352.618:285): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe446b90f0 a2=0 a3=7f46dc649e90 items=0 ppid=1922 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:12.628299 kernel: audit: type=1327 audit(1719332352.618:285): 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:19:12.618000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 16:19:12.699902 kubelet[1922]: I0625 16:19:12.699845 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:12.700336 kubelet[1922]: E0625 16:19:12.700304 1922 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jun 25 16:19:12.716421 kubelet[1922]: E0625 16:19:12.716368 1922 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:19:12.799585 kubelet[1922]: E0625 16:19:12.799442 1922 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Jun 25 16:19:12.901918 kubelet[1922]: I0625 16:19:12.901870 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:12.902267 kubelet[1922]: E0625 16:19:12.902249 1922 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jun 25 16:19:12.909871 kubelet[1922]: I0625 16:19:12.909831 1922 policy_none.go:49] "None policy: Start" Jun 25 16:19:12.910561 kubelet[1922]: I0625 16:19:12.910533 1922 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:19:12.910561 kubelet[1922]: I0625 16:19:12.910556 1922 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:19:12.916477 kubelet[1922]: E0625 16:19:12.916449 1922 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 16:19:12.929515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 16:19:12.954720 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 16:19:12.957717 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
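The audit PROCTITLE records above encode each process's command line as NUL-separated hex. A minimal Python helper (not part of the log) recovers the readable command; decoding the hex recorded for the ip6tables call that creates the KUBE-KUBELET-CANARY chain yields the line shown in the final comment.

    # Decode an auditd PROCTITLE hex payload (argv joined by NUL bytes) into a readable command line.
    def decode_proctitle(hex_string: str) -> str:
        return bytes.fromhex(hex_string).replace(b"\x00", b" ").decode()

    # Hex copied from one of the PROCTITLE records above:
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572"
    ))
    # ip6tables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t filter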
Jun 25 16:19:12.969096 kubelet[1922]: I0625 16:19:12.969065 1922 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:19:12.969597 kubelet[1922]: I0625 16:19:12.969584 1922 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:19:12.971132 kubelet[1922]: E0625 16:19:12.971109 1922 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 25 16:19:13.200688 kubelet[1922]: E0625 16:19:13.200544 1922 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Jun 25 16:19:13.303964 kubelet[1922]: I0625 16:19:13.303923 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:13.304345 kubelet[1922]: E0625 16:19:13.304320 1922 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jun 25 16:19:13.317610 kubelet[1922]: I0625 16:19:13.317515 1922 topology_manager.go:215] "Topology Admit Handler" podUID="58a8406cbcc5d1f466b3c9fc9cd923ee" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:19:13.318948 kubelet[1922]: I0625 16:19:13.318902 1922 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:19:13.320093 kubelet[1922]: I0625 16:19:13.320078 1922 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:19:13.325915 systemd[1]: Created slice kubepods-burstable-pod58a8406cbcc5d1f466b3c9fc9cd923ee.slice - libcontainer container kubepods-burstable-pod58a8406cbcc5d1f466b3c9fc9cd923ee.slice. Jun 25 16:19:13.345206 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice - libcontainer container kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jun 25 16:19:13.361675 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice - libcontainer container kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
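The repeated "connection refused" errors and lease retries against https://10.0.0.61:6443 are expected at this stage: the kubelet starts before the static-pod control plane it is about to create. A quick reachability check, sketched with a hypothetical helper and the address taken from the log, could look like this.

    # Probe whether the kube-apiserver endpoint seen in the log accepts TCP connections yet.
    import socket

    def apiserver_reachable(host: str = "10.0.0.61", port: int = 6443, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(apiserver_reachable())  # False until the kube-apiserver static pod is serving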
Jun 25 16:19:13.399954 kubelet[1922]: I0625 16:19:13.399902 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:13.399954 kubelet[1922]: I0625 16:19:13.399964 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:13.400183 kubelet[1922]: I0625 16:19:13.399995 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:13.400183 kubelet[1922]: I0625 16:19:13.400022 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:19:13.400183 kubelet[1922]: I0625 16:19:13.400047 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:13.400183 kubelet[1922]: I0625 16:19:13.400100 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:13.400183 kubelet[1922]: I0625 16:19:13.400155 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:13.400353 kubelet[1922]: I0625 16:19:13.400190 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:13.400353 kubelet[1922]: I0625 16:19:13.400241 1922 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:13.443793 kubelet[1922]: W0625 16:19:13.443725 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.443793 kubelet[1922]: E0625 16:19:13.443780 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.513133 kubelet[1922]: W0625 16:19:13.512949 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.513133 kubelet[1922]: E0625 16:19:13.513038 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.643684 kubelet[1922]: E0625 16:19:13.643626 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:13.644472 containerd[1287]: time="2024-06-25T16:19:13.644416300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58a8406cbcc5d1f466b3c9fc9cd923ee,Namespace:kube-system,Attempt:0,}" Jun 25 16:19:13.655195 kubelet[1922]: W0625 16:19:13.655109 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.655195 kubelet[1922]: E0625 16:19:13.655193 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:13.659315 kubelet[1922]: E0625 16:19:13.659292 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:13.659891 containerd[1287]: time="2024-06-25T16:19:13.659850760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jun 25 16:19:13.664178 kubelet[1922]: E0625 16:19:13.664136 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:13.664683 containerd[1287]: time="2024-06-25T16:19:13.664646411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jun 25 16:19:14.001831 kubelet[1922]: E0625 16:19:14.001692 1922 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="1.6s" Jun 25 16:19:14.105473 kubelet[1922]: I0625 16:19:14.105435 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:14.105794 kubelet[1922]: E0625 16:19:14.105773 1922 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jun 25 16:19:14.174389 kubelet[1922]: W0625 16:19:14.174297 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:14.174389 kubelet[1922]: E0625 16:19:14.174340 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:14.589040 kubelet[1922]: E0625 16:19:14.588981 1922 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.065803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3167034615.mount: Deactivated successfully. Jun 25 16:19:15.078108 containerd[1287]: time="2024-06-25T16:19:15.078047857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.079369 containerd[1287]: time="2024-06-25T16:19:15.079314719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.080332 containerd[1287]: time="2024-06-25T16:19:15.080243023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:19:15.084461 containerd[1287]: time="2024-06-25T16:19:15.084424917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.087468 containerd[1287]: time="2024-06-25T16:19:15.087415915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 16:19:15.093006 containerd[1287]: time="2024-06-25T16:19:15.092920505Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.094118 containerd[1287]: time="2024-06-25T16:19:15.094034735Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 16:19:15.095053 containerd[1287]: time="2024-06-25T16:19:15.095010971Z" level=info msg="ImageUpdate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.096527 containerd[1287]: time="2024-06-25T16:19:15.096411639Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.098294 containerd[1287]: time="2024-06-25T16:19:15.098150192Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.100044 containerd[1287]: time="2024-06-25T16:19:15.100009636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.101464 containerd[1287]: time="2024-06-25T16:19:15.101430462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.102435 containerd[1287]: time="2024-06-25T16:19:15.102383093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.442423204s" Jun 25 16:19:15.102904 containerd[1287]: time="2024-06-25T16:19:15.102859985Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.104714 containerd[1287]: time="2024-06-25T16:19:15.104609408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.460080072s" Jun 25 16:19:15.105478 containerd[1287]: time="2024-06-25T16:19:15.105438674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.440701378s" Jun 25 16:19:15.105832 containerd[1287]: time="2024-06-25T16:19:15.105793672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 16:19:15.106487 containerd[1287]: time="2024-06-25T16:19:15.106447051Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jun 25 16:19:15.203873 kubelet[1922]: W0625 16:19:15.203781 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.203873 kubelet[1922]: E0625 16:19:15.203870 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.349595 containerd[1287]: time="2024-06-25T16:19:15.348471480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:15.349595 containerd[1287]: time="2024-06-25T16:19:15.348563866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.349595 containerd[1287]: time="2024-06-25T16:19:15.348623790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:15.349595 containerd[1287]: time="2024-06-25T16:19:15.348661322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.365927 containerd[1287]: time="2024-06-25T16:19:15.365683970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:15.365927 containerd[1287]: time="2024-06-25T16:19:15.365924900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.366149 containerd[1287]: time="2024-06-25T16:19:15.365965728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:15.366149 containerd[1287]: time="2024-06-25T16:19:15.365994162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.366558 containerd[1287]: time="2024-06-25T16:19:15.366479841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:15.366615 containerd[1287]: time="2024-06-25T16:19:15.366591324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.366682 containerd[1287]: time="2024-06-25T16:19:15.366648683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:15.367419 containerd[1287]: time="2024-06-25T16:19:15.367213964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:15.434601 systemd[1]: Started cri-containerd-3ec39c77e179ddbe93cb022116deff133a8295ca49825d17cfb5ae652286e104.scope - libcontainer container 3ec39c77e179ddbe93cb022116deff133a8295ca49825d17cfb5ae652286e104. 
Jun 25 16:19:15.442513 systemd[1]: Started cri-containerd-9e04dbc9cf9c5e59cf6d0047242ac38b952b680a3af6bb836052f8d8ece0e2f4.scope - libcontainer container 9e04dbc9cf9c5e59cf6d0047242ac38b952b680a3af6bb836052f8d8ece0e2f4. Jun 25 16:19:15.451144 systemd[1]: Started cri-containerd-e6144c79a26ac761d733213b98aef3ca6a3fa889120e3891ee6aedca17516fd8.scope - libcontainer container e6144c79a26ac761d733213b98aef3ca6a3fa889120e3891ee6aedca17516fd8. Jun 25 16:19:15.455000 audit: BPF prog-id=58 op=LOAD Jun 25 16:19:15.456000 audit: BPF prog-id=59 op=LOAD Jun 25 16:19:15.459685 kernel: audit: type=1334 audit(1719332355.455:286): prog-id=58 op=LOAD Jun 25 16:19:15.459852 kernel: audit: type=1334 audit(1719332355.456:287): prog-id=59 op=LOAD Jun 25 16:19:15.459896 kernel: audit: type=1300 audit(1719332355.456:287): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1985 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.456000 audit[1996]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1985 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.462601 kernel: audit: type=1327 audit(1719332355.456:287): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633339633737653137396464626539336362303232313136646566 Jun 25 16:19:15.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633339633737653137396464626539336362303232313136646566 Jun 25 16:19:15.467505 kernel: audit: type=1334 audit(1719332355.456:288): prog-id=60 op=LOAD Jun 25 16:19:15.456000 audit: BPF prog-id=60 op=LOAD Jun 25 16:19:15.456000 audit[1996]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1985 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.472090 kernel: audit: type=1300 audit(1719332355.456:288): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1985 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.472143 kernel: audit: type=1327 audit(1719332355.456:288): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633339633737653137396464626539336362303232313136646566 Jun 25 16:19:15.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633339633737653137396464626539336362303232313136646566 Jun 25 
16:19:15.456000 audit: BPF prog-id=60 op=UNLOAD Jun 25 16:19:15.456000 audit: BPF prog-id=59 op=UNLOAD Jun 25 16:19:15.456000 audit: BPF prog-id=61 op=LOAD Jun 25 16:19:15.456000 audit[1996]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=1985 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365633339633737653137396464626539336362303232313136646566 Jun 25 16:19:15.469000 audit: BPF prog-id=62 op=LOAD Jun 25 16:19:15.469000 audit: BPF prog-id=63 op=LOAD Jun 25 16:19:15.469000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1979 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965303464626339636639633565353963663664303034373234326163 Jun 25 16:19:15.469000 audit: BPF prog-id=64 op=LOAD Jun 25 16:19:15.469000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1979 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965303464626339636639633565353963663664303034373234326163 Jun 25 16:19:15.469000 audit: BPF prog-id=64 op=UNLOAD Jun 25 16:19:15.469000 audit: BPF prog-id=63 op=UNLOAD Jun 25 16:19:15.469000 audit: BPF prog-id=65 op=LOAD Jun 25 16:19:15.469000 audit[2024]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1979 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965303464626339636639633565353963663664303034373234326163 Jun 25 16:19:15.472000 audit: BPF prog-id=66 op=LOAD Jun 25 16:19:15.476000 audit: BPF prog-id=67 op=LOAD Jun 25 16:19:15.476000 audit[2029]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=1995 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.476000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313434633739613236616337363164373333323133623938616566 Jun 25 16:19:15.476000 audit: BPF prog-id=68 op=LOAD Jun 25 16:19:15.476000 audit[2029]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=1995 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313434633739613236616337363164373333323133623938616566 Jun 25 16:19:15.476000 audit: BPF prog-id=68 op=UNLOAD Jun 25 16:19:15.476000 audit: BPF prog-id=67 op=UNLOAD Jun 25 16:19:15.476000 audit: BPF prog-id=69 op=LOAD Jun 25 16:19:15.476000 audit[2029]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=1995 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313434633739613236616337363164373333323133623938616566 Jun 25 16:19:15.570908 containerd[1287]: time="2024-06-25T16:19:15.570852590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ec39c77e179ddbe93cb022116deff133a8295ca49825d17cfb5ae652286e104\"" Jun 25 16:19:15.572125 kubelet[1922]: E0625 16:19:15.572104 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:15.575708 containerd[1287]: time="2024-06-25T16:19:15.575663867Z" level=info msg="CreateContainer within sandbox \"3ec39c77e179ddbe93cb022116deff133a8295ca49825d17cfb5ae652286e104\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 16:19:15.579462 containerd[1287]: time="2024-06-25T16:19:15.579427461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6144c79a26ac761d733213b98aef3ca6a3fa889120e3891ee6aedca17516fd8\"" Jun 25 16:19:15.579844 containerd[1287]: time="2024-06-25T16:19:15.579820572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:58a8406cbcc5d1f466b3c9fc9cd923ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e04dbc9cf9c5e59cf6d0047242ac38b952b680a3af6bb836052f8d8ece0e2f4\"" Jun 25 16:19:15.580285 kubelet[1922]: E0625 16:19:15.580253 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:15.581026 kubelet[1922]: E0625 16:19:15.580887 1922 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:15.582197 containerd[1287]: time="2024-06-25T16:19:15.582166045Z" level=info msg="CreateContainer within sandbox \"e6144c79a26ac761d733213b98aef3ca6a3fa889120e3891ee6aedca17516fd8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 16:19:15.583310 containerd[1287]: time="2024-06-25T16:19:15.583263574Z" level=info msg="CreateContainer within sandbox \"9e04dbc9cf9c5e59cf6d0047242ac38b952b680a3af6bb836052f8d8ece0e2f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 16:19:15.602848 kubelet[1922]: E0625 16:19:15.602702 1922 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="3.2s" Jun 25 16:19:15.707298 containerd[1287]: time="2024-06-25T16:19:15.707244080Z" level=info msg="CreateContainer within sandbox \"3ec39c77e179ddbe93cb022116deff133a8295ca49825d17cfb5ae652286e104\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f806081cf4accb75cb7146df79de80cca0d604a3698ebda29e592d8f6ac9dc16\"" Jun 25 16:19:15.707663 kubelet[1922]: I0625 16:19:15.707628 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:15.707899 containerd[1287]: time="2024-06-25T16:19:15.707871330Z" level=info msg="StartContainer for \"f806081cf4accb75cb7146df79de80cca0d604a3698ebda29e592d8f6ac9dc16\"" Jun 25 16:19:15.708117 kubelet[1922]: E0625 16:19:15.708072 1922 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Jun 25 16:19:15.737444 systemd[1]: Started cri-containerd-f806081cf4accb75cb7146df79de80cca0d604a3698ebda29e592d8f6ac9dc16.scope - libcontainer container f806081cf4accb75cb7146df79de80cca0d604a3698ebda29e592d8f6ac9dc16. 
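The recurring "Nameserver limits exceeded" warnings indicate that the node's resolver configuration lists more nameservers than the kubelet will pass through, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. The sketch below is a hypothetical check; the /etc/resolv.conf path and the limit of three nameservers are assumptions based on the usual resolver behaviour, not taken from this log.

    # Count nameserver entries in resolv.conf; more than three triggers the kubelet warning above.
    def count_nameservers(path: str = "/etc/resolv.conf") -> int:
        with open(path) as f:
            return sum(1 for line in f if line.strip().startswith("nameserver"))

    LIMIT = 3  # assumed resolver limit; only the first three nameservers are applied
    n = count_nameservers()
    if n > LIMIT:
        print(f"{n} nameservers configured; only the first {LIMIT} will be applied")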
Jun 25 16:19:15.747000 audit: BPF prog-id=70 op=LOAD Jun 25 16:19:15.747000 audit: BPF prog-id=71 op=LOAD Jun 25 16:19:15.747000 audit[2097]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=1985 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638303630383163663461636362373563623731343664663739646538 Jun 25 16:19:15.747000 audit: BPF prog-id=72 op=LOAD Jun 25 16:19:15.747000 audit[2097]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=1985 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638303630383163663461636362373563623731343664663739646538 Jun 25 16:19:15.747000 audit: BPF prog-id=72 op=UNLOAD Jun 25 16:19:15.747000 audit: BPF prog-id=71 op=UNLOAD Jun 25 16:19:15.747000 audit: BPF prog-id=73 op=LOAD Jun 25 16:19:15.747000 audit[2097]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=1985 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638303630383163663461636362373563623731343664663739646538 Jun 25 16:19:15.793795 kubelet[1922]: W0625 16:19:15.793724 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.793795 kubelet[1922]: E0625 16:19:15.793795 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.796561 containerd[1287]: time="2024-06-25T16:19:15.796503863Z" level=info msg="StartContainer for \"f806081cf4accb75cb7146df79de80cca0d604a3698ebda29e592d8f6ac9dc16\" returns successfully" Jun 25 16:19:15.796674 containerd[1287]: time="2024-06-25T16:19:15.796514343Z" level=info msg="CreateContainer within sandbox \"e6144c79a26ac761d733213b98aef3ca6a3fa889120e3891ee6aedca17516fd8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ef1de095628921d6596eb8a69f91b3405961c4d5e4c9f83b343be3274948a3f\"" Jun 25 16:19:15.797570 containerd[1287]: time="2024-06-25T16:19:15.797547668Z" level=info msg="StartContainer for \"6ef1de095628921d6596eb8a69f91b3405961c4d5e4c9f83b343be3274948a3f\"" Jun 25 
16:19:15.800211 containerd[1287]: time="2024-06-25T16:19:15.800150413Z" level=info msg="CreateContainer within sandbox \"9e04dbc9cf9c5e59cf6d0047242ac38b952b680a3af6bb836052f8d8ece0e2f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6fe8c009f9731b6554de0eae97905dcf3802f0bef0cc0f741a9613f92ce878b\"" Jun 25 16:19:15.801445 containerd[1287]: time="2024-06-25T16:19:15.801422004Z" level=info msg="StartContainer for \"d6fe8c009f9731b6554de0eae97905dcf3802f0bef0cc0f741a9613f92ce878b\"" Jun 25 16:19:15.866751 systemd[1]: Started cri-containerd-6ef1de095628921d6596eb8a69f91b3405961c4d5e4c9f83b343be3274948a3f.scope - libcontainer container 6ef1de095628921d6596eb8a69f91b3405961c4d5e4c9f83b343be3274948a3f. Jun 25 16:19:15.870094 systemd[1]: Started cri-containerd-d6fe8c009f9731b6554de0eae97905dcf3802f0bef0cc0f741a9613f92ce878b.scope - libcontainer container d6fe8c009f9731b6554de0eae97905dcf3802f0bef0cc0f741a9613f92ce878b. Jun 25 16:19:15.871476 kubelet[1922]: W0625 16:19:15.871398 1922 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.871546 kubelet[1922]: E0625 16:19:15.871492 1922 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Jun 25 16:19:15.879000 audit: BPF prog-id=74 op=LOAD Jun 25 16:19:15.879000 audit: BPF prog-id=75 op=LOAD Jun 25 16:19:15.879000 audit[2143]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=1995 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663164653039353632383932316436353936656238613639663931 Jun 25 16:19:15.880000 audit: BPF prog-id=76 op=LOAD Jun 25 16:19:15.880000 audit[2143]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=1995 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663164653039353632383932316436353936656238613639663931 Jun 25 16:19:15.880000 audit: BPF prog-id=76 op=UNLOAD Jun 25 16:19:15.880000 audit: BPF prog-id=75 op=UNLOAD Jun 25 16:19:15.880000 audit: BPF prog-id=77 op=LOAD Jun 25 16:19:15.880000 audit[2143]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=1995 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.880000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665663164653039353632383932316436353936656238613639663931 Jun 25 16:19:15.882000 audit: BPF prog-id=78 op=LOAD Jun 25 16:19:15.883000 audit: BPF prog-id=79 op=LOAD Jun 25 16:19:15.883000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=1979 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436666538633030396639373331623635353464653065616539373930 Jun 25 16:19:15.883000 audit: BPF prog-id=80 op=LOAD Jun 25 16:19:15.883000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=1979 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436666538633030396639373331623635353464653065616539373930 Jun 25 16:19:15.883000 audit: BPF prog-id=80 op=UNLOAD Jun 25 16:19:15.883000 audit: BPF prog-id=79 op=UNLOAD Jun 25 16:19:15.883000 audit: BPF prog-id=81 op=LOAD Jun 25 16:19:15.883000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=1979 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:15.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436666538633030396639373331623635353464653065616539373930 Jun 25 16:19:16.077720 containerd[1287]: time="2024-06-25T16:19:16.077656052Z" level=info msg="StartContainer for \"6ef1de095628921d6596eb8a69f91b3405961c4d5e4c9f83b343be3274948a3f\" returns successfully" Jun 25 16:19:16.078023 containerd[1287]: time="2024-06-25T16:19:16.077902974Z" level=info msg="StartContainer for \"d6fe8c009f9731b6554de0eae97905dcf3802f0bef0cc0f741a9613f92ce878b\" returns successfully" Jun 25 16:19:16.318000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:16.319000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:16.318000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 
a1=c000b3c030 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:16.319000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000188520 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:16.319000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:16.318000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:16.630584 kubelet[1922]: E0625 16:19:16.630447 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:16.631253 kubelet[1922]: E0625 16:19:16.631229 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:16.633140 kubelet[1922]: E0625 16:19:16.633120 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:17.613000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.613000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00435c4e0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.613000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.613000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6271 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.613000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c00435c540 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 
key=(null) Jun 25 16:19:17.613000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.628465 kernel: kauditd_printk_skb: 77 callbacks suppressed Jun 25 16:19:17.628615 kernel: audit: type=1400 audit(1719332357.619:326): avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.628646 kernel: audit: type=1300 audit(1719332357.619:326): arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c00435dc50 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.619000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.619000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c00435dc50 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.632314 kernel: audit: type=1327 audit(1719332357.619:326): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.619000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.620000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.637233 kernel: audit: type=1400 audit(1719332357.620:327): avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.637665 kubelet[1922]: E0625 16:19:17.637631 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:17.620000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c0044972e0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 
key=(null) Jun 25 16:19:17.641892 kubelet[1922]: E0625 16:19:17.638293 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:17.641892 kubelet[1922]: E0625 16:19:17.638759 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:17.642234 kernel: audit: type=1300 audit(1719332357.620:327): arch=c000003e syscall=254 success=no exit=-13 a0=4b a1=c0044972e0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.620000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.652748 kernel: audit: type=1327 audit(1719332357.620:327): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.652816 kernel: audit: type=1400 audit(1719332357.640:328): avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.640000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.656603 kernel: audit: type=1300 audit(1719332357.640:328): arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c0094ff440 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.640000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c0094ff440 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.640000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.667445 kernel: audit: type=1327 audit(1719332357.640:328): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:17.667617 kernel: audit: type=1400 
audit(1719332357.640:329): avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.640000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:17.640000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=47 a1=c005c64840 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:19:17.640000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:19:18.252996 kubelet[1922]: E0625 16:19:18.252938 1922 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:19:18.562935 kubelet[1922]: I0625 16:19:18.562779 1922 apiserver.go:52] "Watching apiserver" Jun 25 16:19:18.595623 kubelet[1922]: I0625 16:19:18.595570 1922 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:19:18.615577 kubelet[1922]: E0625 16:19:18.615541 1922 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 25 16:19:18.637364 kubelet[1922]: E0625 16:19:18.637335 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:18.637720 kubelet[1922]: E0625 16:19:18.637689 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:18.818317 kubelet[1922]: E0625 16:19:18.818197 1922 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 25 16:19:18.909840 kubelet[1922]: I0625 16:19:18.909809 1922 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:18.941279 kubelet[1922]: I0625 16:19:18.941212 1922 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:19:19.343141 update_engine[1273]: I0625 16:19:19.343061 1273 update_attempter.cc:509] Updating boot flags... 
Jun 25 16:19:19.413251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2211) Jun 25 16:19:19.453255 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2215) Jun 25 16:19:19.898523 kubelet[1922]: E0625 16:19:19.898458 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:20.640287 kubelet[1922]: E0625 16:19:20.640216 1922 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:22.975315 systemd[1]: Reloading. Jun 25 16:19:23.109955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 16:19:23.187914 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 16:19:23.188014 kernel: audit: type=1334 audit(1719332363.176:330): prog-id=82 op=LOAD Jun 25 16:19:23.188041 kernel: audit: type=1334 audit(1719332363.176:331): prog-id=44 op=UNLOAD Jun 25 16:19:23.188062 kernel: audit: type=1334 audit(1719332363.177:332): prog-id=83 op=LOAD Jun 25 16:19:23.188115 kernel: audit: type=1334 audit(1719332363.177:333): prog-id=66 op=UNLOAD Jun 25 16:19:23.188137 kernel: audit: type=1334 audit(1719332363.177:334): prog-id=84 op=LOAD Jun 25 16:19:23.188164 kernel: audit: type=1334 audit(1719332363.177:335): prog-id=45 op=UNLOAD Jun 25 16:19:23.188185 kernel: audit: type=1334 audit(1719332363.178:336): prog-id=85 op=LOAD Jun 25 16:19:23.188205 kernel: audit: type=1334 audit(1719332363.178:337): prog-id=86 op=LOAD Jun 25 16:19:23.188251 kernel: audit: type=1334 audit(1719332363.178:338): prog-id=46 op=UNLOAD Jun 25 16:19:23.176000 audit: BPF prog-id=82 op=LOAD Jun 25 16:19:23.176000 audit: BPF prog-id=44 op=UNLOAD Jun 25 16:19:23.177000 audit: BPF prog-id=83 op=LOAD Jun 25 16:19:23.177000 audit: BPF prog-id=66 op=UNLOAD Jun 25 16:19:23.177000 audit: BPF prog-id=84 op=LOAD Jun 25 16:19:23.177000 audit: BPF prog-id=45 op=UNLOAD Jun 25 16:19:23.178000 audit: BPF prog-id=85 op=LOAD Jun 25 16:19:23.178000 audit: BPF prog-id=86 op=LOAD Jun 25 16:19:23.178000 audit: BPF prog-id=46 op=UNLOAD Jun 25 16:19:23.178000 audit: BPF prog-id=47 op=UNLOAD Jun 25 16:19:23.179000 audit: BPF prog-id=87 op=LOAD Jun 25 16:19:23.179000 audit: BPF prog-id=78 op=UNLOAD Jun 25 16:19:23.180000 audit: BPF prog-id=88 op=LOAD Jun 25 16:19:23.180000 audit: BPF prog-id=89 op=LOAD Jun 25 16:19:23.180000 audit: BPF prog-id=48 op=UNLOAD Jun 25 16:19:23.180000 audit: BPF prog-id=49 op=UNLOAD Jun 25 16:19:23.181000 audit: BPF prog-id=90 op=LOAD Jun 25 16:19:23.181000 audit: BPF prog-id=62 op=UNLOAD Jun 25 16:19:23.183000 audit: BPF prog-id=91 op=LOAD Jun 25 16:19:23.183000 audit: BPF prog-id=70 op=UNLOAD Jun 25 16:19:23.184000 audit: BPF prog-id=92 op=LOAD Jun 25 16:19:23.184000 audit: BPF prog-id=50 op=UNLOAD Jun 25 16:19:23.185000 audit: BPF prog-id=93 op=LOAD Jun 25 16:19:23.185000 audit: BPF prog-id=51 op=UNLOAD Jun 25 16:19:23.186000 audit: BPF prog-id=94 op=LOAD Jun 25 16:19:23.186000 audit: BPF prog-id=52 op=UNLOAD Jun 25 16:19:23.186000 audit: BPF prog-id=95 op=LOAD Jun 25 16:19:23.186000 audit: BPF prog-id=96 op=LOAD Jun 25 16:19:23.186000 audit: BPF prog-id=53 op=UNLOAD Jun 25 16:19:23.186000 audit: BPF prog-id=54 op=UNLOAD 
Jun 25 16:19:23.187000 audit: BPF prog-id=97 op=LOAD Jun 25 16:19:23.187000 audit: BPF prog-id=74 op=UNLOAD Jun 25 16:19:23.188000 audit: BPF prog-id=98 op=LOAD Jun 25 16:19:23.188000 audit: BPF prog-id=55 op=UNLOAD Jun 25 16:19:23.188000 audit: BPF prog-id=99 op=LOAD Jun 25 16:19:23.188000 audit: BPF prog-id=100 op=LOAD Jun 25 16:19:23.188000 audit: BPF prog-id=56 op=UNLOAD Jun 25 16:19:23.188000 audit: BPF prog-id=57 op=UNLOAD Jun 25 16:19:23.190251 kernel: audit: type=1334 audit(1719332363.178:339): prog-id=47 op=UNLOAD Jun 25 16:19:23.189000 audit: BPF prog-id=101 op=LOAD Jun 25 16:19:23.189000 audit: BPF prog-id=58 op=UNLOAD Jun 25 16:19:23.202269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:23.229721 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 16:19:23.230156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:23.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:23.238252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 16:19:23.335690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 16:19:23.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:23.376984 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:19:23.376984 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 16:19:23.376984 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 16:19:23.377436 kubelet[2281]: I0625 16:19:23.377014 2281 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 16:19:23.380803 kubelet[2281]: I0625 16:19:23.380774 2281 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 16:19:23.380803 kubelet[2281]: I0625 16:19:23.380798 2281 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 16:19:23.380988 kubelet[2281]: I0625 16:19:23.380973 2281 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 16:19:23.382158 kubelet[2281]: I0625 16:19:23.382144 2281 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 16:19:23.383051 kubelet[2281]: I0625 16:19:23.383024 2281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 16:19:23.390426 kubelet[2281]: I0625 16:19:23.390372 2281 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 16:19:23.390629 kubelet[2281]: I0625 16:19:23.390611 2281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 16:19:23.390801 kubelet[2281]: I0625 16:19:23.390778 2281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 16:19:23.390801 kubelet[2281]: I0625 16:19:23.390798 2281 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 16:19:23.390927 kubelet[2281]: I0625 16:19:23.390806 2281 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 16:19:23.390927 kubelet[2281]: I0625 16:19:23.390848 2281 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:19:23.391003 kubelet[2281]: I0625 16:19:23.390927 2281 kubelet.go:393] "Attempting to sync node with API server" Jun 25 16:19:23.391003 kubelet[2281]: I0625 16:19:23.390941 2281 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 16:19:23.391003 kubelet[2281]: I0625 16:19:23.390965 2281 kubelet.go:309] "Adding apiserver pod source" Jun 25 16:19:23.391003 kubelet[2281]: I0625 16:19:23.390979 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 16:19:23.391642 kubelet[2281]: I0625 16:19:23.391629 2281 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.392257 2281 server.go:1232] "Started kubelet" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.392431 2281 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.392538 2281 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.392810 2281 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.393098 2281 server.go:462] "Adding debug handlers to kubelet server" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.394343 2281 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.396474 2281 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.398017 2281 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 16:19:23.399130 kubelet[2281]: I0625 16:19:23.398233 2281 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 16:19:23.401035 kubelet[2281]: E0625 16:19:23.400784 2281 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 16:19:23.401035 kubelet[2281]: E0625 16:19:23.400825 2281 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 16:19:23.413755 kubelet[2281]: I0625 16:19:23.413026 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 16:19:23.416280 kubelet[2281]: I0625 16:19:23.416204 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 16:19:23.416280 kubelet[2281]: I0625 16:19:23.416256 2281 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 16:19:23.416280 kubelet[2281]: I0625 16:19:23.416278 2281 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 16:19:23.416581 kubelet[2281]: E0625 16:19:23.416550 2281 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 16:19:23.444666 kubelet[2281]: I0625 16:19:23.444631 2281 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 16:19:23.444666 kubelet[2281]: I0625 16:19:23.444650 2281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 16:19:23.444666 kubelet[2281]: I0625 16:19:23.444666 2281 state_mem.go:36] "Initialized new in-memory state store" Jun 25 16:19:23.444954 kubelet[2281]: I0625 16:19:23.444810 2281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 16:19:23.444954 kubelet[2281]: I0625 16:19:23.444828 2281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 16:19:23.444954 kubelet[2281]: I0625 16:19:23.444834 2281 policy_none.go:49] "None policy: Start" Jun 25 16:19:23.445622 kubelet[2281]: I0625 16:19:23.445575 2281 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 16:19:23.445622 kubelet[2281]: I0625 16:19:23.445621 2281 state_mem.go:35] "Initializing new in-memory state store" Jun 25 16:19:23.445840 kubelet[2281]: I0625 16:19:23.445824 2281 state_mem.go:75] "Updated machine memory state" Jun 25 16:19:23.450008 kubelet[2281]: I0625 16:19:23.449975 2281 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 16:19:23.450377 kubelet[2281]: I0625 16:19:23.450175 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 16:19:23.499923 kubelet[2281]: I0625 16:19:23.499823 2281 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jun 25 16:19:23.517341 kubelet[2281]: I0625 16:19:23.517284 2281 topology_manager.go:215] "Topology Admit Handler" podUID="58a8406cbcc5d1f466b3c9fc9cd923ee" podNamespace="kube-system" podName="kube-apiserver-localhost" Jun 25 16:19:23.517538 kubelet[2281]: I0625 16:19:23.517432 2281 topology_manager.go:215] "Topology Admit Handler" 
podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jun 25 16:19:23.517538 kubelet[2281]: I0625 16:19:23.517508 2281 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jun 25 16:19:23.588901 kubelet[2281]: E0625 16:19:23.588845 2281 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 25 16:19:23.623154 kubelet[2281]: I0625 16:19:23.623115 2281 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jun 25 16:19:23.623357 kubelet[2281]: I0625 16:19:23.623239 2281 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jun 25 16:19:23.701736 kubelet[2281]: I0625 16:19:23.701661 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:23.701736 kubelet[2281]: I0625 16:19:23.701723 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:23.702001 kubelet[2281]: I0625 16:19:23.701811 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:23.702001 kubelet[2281]: I0625 16:19:23.701888 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jun 25 16:19:23.702001 kubelet[2281]: I0625 16:19:23.701915 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:23.702001 kubelet[2281]: I0625 16:19:23.701943 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:23.702001 kubelet[2281]: I0625 16:19:23.701965 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:23.702105 kubelet[2281]: I0625 16:19:23.701988 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:23.702105 kubelet[2281]: I0625 16:19:23.702012 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58a8406cbcc5d1f466b3c9fc9cd923ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"58a8406cbcc5d1f466b3c9fc9cd923ee\") " pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:23.856027 kubelet[2281]: E0625 16:19:23.855982 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:23.856206 kubelet[2281]: E0625 16:19:23.856053 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:23.890336 kubelet[2281]: E0625 16:19:23.890288 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:24.391707 kubelet[2281]: I0625 16:19:24.391653 2281 apiserver.go:52] "Watching apiserver" Jun 25 16:19:24.399107 kubelet[2281]: I0625 16:19:24.399072 2281 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 16:19:24.427131 kubelet[2281]: E0625 16:19:24.427103 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:24.776775 kubelet[2281]: E0625 16:19:24.776650 2281 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 25 16:19:24.777520 kubelet[2281]: E0625 16:19:24.777505 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:24.779016 kubelet[2281]: E0625 16:19:24.778996 2281 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 25 16:19:24.779337 kubelet[2281]: E0625 16:19:24.779323 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:25.308390 kubelet[2281]: I0625 16:19:25.308332 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.308273164 podCreationTimestamp="2024-06-25 16:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:19:25.30805471 +0000 UTC m=+1.969098562" watchObservedRunningTime="2024-06-25 16:19:25.308273164 +0000 UTC m=+1.969317026" Jun 
25 16:19:25.308590 kubelet[2281]: I0625 16:19:25.308451 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.308430151 podCreationTimestamp="2024-06-25 16:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:19:24.795621346 +0000 UTC m=+1.456665198" watchObservedRunningTime="2024-06-25 16:19:25.308430151 +0000 UTC m=+1.969474003" Jun 25 16:19:25.427812 kubelet[2281]: E0625 16:19:25.427773 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:25.428149 kubelet[2281]: E0625 16:19:25.428123 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:26.429426 kubelet[2281]: E0625 16:19:26.429361 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:26.530000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520979 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 16:19:26.530000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=8 a1=c000d54c00 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:26.530000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:30.451556 kubelet[2281]: E0625 16:19:30.451511 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:31.204091 sudo[1427]: pam_unix(sudo:session): session closed for user root Jun 25 16:19:31.202000 audit[1427]: USER_END pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.204931 kernel: kauditd_printk_skb: 35 callbacks suppressed Jun 25 16:19:31.204970 kernel: audit: type=1106 audit(1719332371.202:373): pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.205975 sshd[1424]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:31.202000 audit[1427]: CRED_DISP pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:31.209700 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:43812.service: Deactivated successfully. Jun 25 16:19:31.210632 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 16:19:31.210803 systemd[1]: session-7.scope: Consumed 4.581s CPU time. Jun 25 16:19:31.211273 systemd-logind[1271]: Session 7 logged out. Waiting for processes to exit. Jun 25 16:19:31.211565 kernel: audit: type=1104 audit(1719332371.202:374): pid=1427 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.211611 kernel: audit: type=1106 audit(1719332371.204:375): pid=1424 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:31.204000 audit[1424]: USER_END pid=1424 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:31.212061 systemd-logind[1271]: Removed session 7. Jun 25 16:19:31.204000 audit[1424]: CRED_DISP pid=1424 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:31.218345 kernel: audit: type=1104 audit(1719332371.204:376): pid=1424 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:31.218392 kernel: audit: type=1131 audit(1719332371.207:377): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.61:22-10.0.0.1:43812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:31.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.61:22-10.0.0.1:43812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:31.436897 kubelet[2281]: E0625 16:19:31.436862 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:31.815322 kubelet[2281]: E0625 16:19:31.815268 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:32.436966 kubelet[2281]: E0625 16:19:32.436821 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:32.437377 kubelet[2281]: E0625 16:19:32.437210 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:32.437897 kubelet[2281]: E0625 16:19:32.437873 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:33.439164 kubelet[2281]: E0625 16:19:33.439132 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:37.271000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.271000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00094f6e0 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.281626 kernel: audit: type=1400 audit(1719332377.271:378): avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.281793 kernel: audit: type=1300 audit(1719332377.271:378): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00094f6e0 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.281823 kernel: audit: type=1327 audit(1719332377.271:378): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:37.271000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:37.272000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" 
path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.289652 kernel: audit: type=1400 audit(1719332377.272:379): avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.272000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a720 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.294763 kernel: audit: type=1300 audit(1719332377.272:379): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a720 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.294953 kernel: audit: type=1327 audit(1719332377.272:379): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:37.272000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:37.273000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.301953 kernel: audit: type=1400 audit(1719332377.273:380): avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.302113 kernel: audit: type=1300 audit(1719332377.273:380): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a980 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.273000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a980 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.273000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 
Jun 25 16:19:37.309290 kernel: audit: type=1327 audit(1719332377.273:380): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:37.309437 kernel: audit: type=1400 audit(1719332377.273:381): avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.273000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:19:37.273000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000b7d160 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:19:37.273000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:19:38.246000 kubelet[2281]: I0625 16:19:38.245953 2281 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 16:19:38.246547 containerd[1287]: time="2024-06-25T16:19:38.246485694Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 16:19:38.246789 kubelet[2281]: I0625 16:19:38.246696 2281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 16:19:38.824741 kubelet[2281]: I0625 16:19:38.824695 2281 topology_manager.go:215] "Topology Admit Handler" podUID="0fd218a9-5829-4b4f-97a1-f93a3ef3140a" podNamespace="kube-system" podName="kube-proxy-2bcg5" Jun 25 16:19:38.830114 systemd[1]: Created slice kubepods-besteffort-pod0fd218a9_5829_4b4f_97a1_f93a3ef3140a.slice - libcontainer container kubepods-besteffort-pod0fd218a9_5829_4b4f_97a1_f93a3ef3140a.slice. 
Jun 25 16:19:38.901121 kubelet[2281]: I0625 16:19:38.901058 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npv9h\" (UniqueName: \"kubernetes.io/projected/0fd218a9-5829-4b4f-97a1-f93a3ef3140a-kube-api-access-npv9h\") pod \"kube-proxy-2bcg5\" (UID: \"0fd218a9-5829-4b4f-97a1-f93a3ef3140a\") " pod="kube-system/kube-proxy-2bcg5" Jun 25 16:19:38.901121 kubelet[2281]: I0625 16:19:38.901131 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fd218a9-5829-4b4f-97a1-f93a3ef3140a-kube-proxy\") pod \"kube-proxy-2bcg5\" (UID: \"0fd218a9-5829-4b4f-97a1-f93a3ef3140a\") " pod="kube-system/kube-proxy-2bcg5" Jun 25 16:19:38.901388 kubelet[2281]: I0625 16:19:38.901184 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fd218a9-5829-4b4f-97a1-f93a3ef3140a-xtables-lock\") pod \"kube-proxy-2bcg5\" (UID: \"0fd218a9-5829-4b4f-97a1-f93a3ef3140a\") " pod="kube-system/kube-proxy-2bcg5" Jun 25 16:19:38.901388 kubelet[2281]: I0625 16:19:38.901204 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fd218a9-5829-4b4f-97a1-f93a3ef3140a-lib-modules\") pod \"kube-proxy-2bcg5\" (UID: \"0fd218a9-5829-4b4f-97a1-f93a3ef3140a\") " pod="kube-system/kube-proxy-2bcg5" Jun 25 16:19:39.139190 kubelet[2281]: E0625 16:19:39.139051 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:39.139742 containerd[1287]: time="2024-06-25T16:19:39.139691447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bcg5,Uid:0fd218a9-5829-4b4f-97a1-f93a3ef3140a,Namespace:kube-system,Attempt:0,}" Jun 25 16:19:39.409934 kubelet[2281]: I0625 16:19:39.409810 2281 topology_manager.go:215] "Topology Admit Handler" podUID="422db163-f400-4cc3-9c34-3724d685be5c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-7px8z" Jun 25 16:19:39.416069 systemd[1]: Created slice kubepods-besteffort-pod422db163_f400_4cc3_9c34_3724d685be5c.slice - libcontainer container kubepods-besteffort-pod422db163_f400_4cc3_9c34_3724d685be5c.slice. Jun 25 16:19:39.505103 kubelet[2281]: I0625 16:19:39.505057 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxlx\" (UniqueName: \"kubernetes.io/projected/422db163-f400-4cc3-9c34-3724d685be5c-kube-api-access-7dxlx\") pod \"tigera-operator-76c4974c85-7px8z\" (UID: \"422db163-f400-4cc3-9c34-3724d685be5c\") " pod="tigera-operator/tigera-operator-76c4974c85-7px8z" Jun 25 16:19:39.505103 kubelet[2281]: I0625 16:19:39.505108 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/422db163-f400-4cc3-9c34-3724d685be5c-var-lib-calico\") pod \"tigera-operator-76c4974c85-7px8z\" (UID: \"422db163-f400-4cc3-9c34-3724d685be5c\") " pod="tigera-operator/tigera-operator-76c4974c85-7px8z" Jun 25 16:19:39.572760 containerd[1287]: time="2024-06-25T16:19:39.572582865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:39.572760 containerd[1287]: time="2024-06-25T16:19:39.572667354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:39.572760 containerd[1287]: time="2024-06-25T16:19:39.572692001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:39.572760 containerd[1287]: time="2024-06-25T16:19:39.572709824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:39.601554 systemd[1]: Started cri-containerd-ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3.scope - libcontainer container ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3. Jun 25 16:19:39.609000 audit: BPF prog-id=102 op=LOAD Jun 25 16:19:39.609000 audit: BPF prog-id=103 op=LOAD Jun 25 16:19:39.609000 audit[2384]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2374 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:39.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164313534363734346339643663323739323936353664623164383865 Jun 25 16:19:39.609000 audit: BPF prog-id=104 op=LOAD Jun 25 16:19:39.609000 audit[2384]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2374 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:39.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164313534363734346339643663323739323936353664623164383865 Jun 25 16:19:39.609000 audit: BPF prog-id=104 op=UNLOAD Jun 25 16:19:39.609000 audit: BPF prog-id=103 op=UNLOAD Jun 25 16:19:39.609000 audit: BPF prog-id=105 op=LOAD Jun 25 16:19:39.609000 audit[2384]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2374 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:39.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6164313534363734346339643663323739323936353664623164383865 Jun 25 16:19:39.620460 containerd[1287]: time="2024-06-25T16:19:39.620407854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bcg5,Uid:0fd218a9-5829-4b4f-97a1-f93a3ef3140a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3\"" Jun 25 16:19:39.621304 kubelet[2281]: E0625 16:19:39.621103 2281 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:39.622863 containerd[1287]: time="2024-06-25T16:19:39.622837398Z" level=info msg="CreateContainer within sandbox \"ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 16:19:39.994893 containerd[1287]: time="2024-06-25T16:19:39.994816031Z" level=info msg="CreateContainer within sandbox \"ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04821ac4fbc33fbf05e82e296499ebf6f1a43685d864bd4171b2eeef91f1dc85\"" Jun 25 16:19:39.995528 containerd[1287]: time="2024-06-25T16:19:39.995353413Z" level=info msg="StartContainer for \"04821ac4fbc33fbf05e82e296499ebf6f1a43685d864bd4171b2eeef91f1dc85\"" Jun 25 16:19:40.019098 containerd[1287]: time="2024-06-25T16:19:40.019046676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-7px8z,Uid:422db163-f400-4cc3-9c34-3724d685be5c,Namespace:tigera-operator,Attempt:0,}" Jun 25 16:19:40.019412 systemd[1]: Started cri-containerd-04821ac4fbc33fbf05e82e296499ebf6f1a43685d864bd4171b2eeef91f1dc85.scope - libcontainer container 04821ac4fbc33fbf05e82e296499ebf6f1a43685d864bd4171b2eeef91f1dc85. Jun 25 16:19:40.029000 audit: BPF prog-id=106 op=LOAD Jun 25 16:19:40.029000 audit[2416]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2374 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034383231616334666263333366626630356538326532393634393965 Jun 25 16:19:40.029000 audit: BPF prog-id=107 op=LOAD Jun 25 16:19:40.029000 audit[2416]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2374 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.029000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034383231616334666263333366626630356538326532393634393965 Jun 25 16:19:40.030000 audit: BPF prog-id=107 op=UNLOAD Jun 25 16:19:40.030000 audit: BPF prog-id=106 op=UNLOAD Jun 25 16:19:40.030000 audit: BPF prog-id=108 op=LOAD Jun 25 16:19:40.030000 audit[2416]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2374 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.030000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034383231616334666263333366626630356538326532393634393965 Jun 25 
16:19:40.052593 systemd[1]: run-containerd-runc-k8s.io-ad1546744c9d6c27929656db1d88e210efba0f0ff19cb3847708ef5f43f6a5b3-runc.oIk7wB.mount: Deactivated successfully. Jun 25 16:19:40.098000 audit[2469]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.098000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe766ddac0 a2=0 a3=7ffe766ddaac items=0 ppid=2426 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:19:40.099000 audit[2470]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.099000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3a68b1b0 a2=0 a3=7ffc3a68b19c items=0 ppid=2426 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 16:19:40.099000 audit[2471]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.099000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffa9f40d0 a2=0 a3=7ffffa9f40bc items=0 ppid=2426 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:19:40.100000 audit[2472]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.100000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6b12fcc0 a2=0 a3=7ffe6b12fcac items=0 ppid=2426 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 16:19:40.100000 audit[2473]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.100000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7f768800 a2=0 a3=7fff7f7687ec items=0 ppid=2426 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 
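The audit PROCTITLE field is the process command line, hex-encoded with NUL-separated arguments, so the NETFILTER_CFG records here and below can be read back to the exact iptables/ip6tables invocations issued while the KUBE-* chains are created. An illustrative decoder, not part of the log (the hex string is copied from the first PROCTITLE record above):

    # Illustrative decoder: audit PROCTITLE = hex(argv joined by NUL bytes).
    proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle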
Jun 25 16:19:40.102000 audit[2474]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.102000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc30a60530 a2=0 a3=7ffc30a6051c items=0 ppid=2426 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.102000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 16:19:40.112854 containerd[1287]: time="2024-06-25T16:19:40.112787713Z" level=info msg="StartContainer for \"04821ac4fbc33fbf05e82e296499ebf6f1a43685d864bd4171b2eeef91f1dc85\" returns successfully" Jun 25 16:19:40.200000 audit[2475]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.200000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc09547180 a2=0 a3=7ffc0954716c items=0 ppid=2426 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:19:40.203000 audit[2477]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.203000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde3e86e90 a2=0 a3=7ffde3e86e7c items=0 ppid=2426 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 16:19:40.206000 audit[2480]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.206000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe24b0c750 a2=0 a3=7ffe24b0c73c items=0 ppid=2426 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 16:19:40.207000 audit[2481]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.207000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcb9bf690 a2=0 a3=7ffdcb9bf67c items=0 ppid=2426 
pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:19:40.210000 audit[2483]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.210000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd250587e0 a2=0 a3=7ffd250587cc items=0 ppid=2426 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:19:40.211000 audit[2484]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.211000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9f4a5f00 a2=0 a3=7ffe9f4a5eec items=0 ppid=2426 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:19:40.214000 audit[2486]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.214000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd74123960 a2=0 a3=7ffd7412394c items=0 ppid=2426 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:19:40.218000 audit[2489]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.218000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf2f45820 a2=0 a3=7ffdf2f4580c items=0 ppid=2426 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 16:19:40.219000 audit[2490]: NETFILTER_CFG table=filter:52 
family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.219000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc13805fe0 a2=0 a3=7ffc13805fcc items=0 ppid=2426 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:19:40.221000 audit[2492]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.221000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe41d87620 a2=0 a3=7ffe41d8760c items=0 ppid=2426 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:19:40.223000 audit[2493]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.223000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe04c4f1c0 a2=0 a3=7ffe04c4f1ac items=0 ppid=2426 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:19:40.225000 audit[2495]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.225000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccf2c9af0 a2=0 a3=7ffccf2c9adc items=0 ppid=2426 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:19:40.229000 audit[2498]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.229000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc540afe20 a2=0 a3=7ffc540afe0c items=0 ppid=2426 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.229000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:19:40.232000 audit[2501]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.232000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff27e3aa90 a2=0 a3=7fff27e3aa7c items=0 ppid=2426 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:19:40.233000 audit[2502]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.233000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffea935a310 a2=0 a3=7ffea935a2fc items=0 ppid=2426 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:19:40.236000 audit[2504]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.236000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe71c53f80 a2=0 a3=7ffe71c53f6c items=0 ppid=2426 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:19:40.240000 audit[2507]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.240000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff3008a790 a2=0 a3=7fff3008a77c items=0 ppid=2426 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:19:40.241000 audit[2508]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.241000 audit[2508]: SYSCALL arch=c000003e syscall=46 
success=yes exit=100 a0=3 a1=7ffcfe8804c0 a2=0 a3=7ffcfe8804ac items=0 ppid=2426 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:19:40.243000 audit[2510]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 16:19:40.243000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffff18113f0 a2=0 a3=7ffff18113dc items=0 ppid=2426 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:19:40.302000 audit[2516]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:40.302000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd4168d260 a2=0 a3=7ffd4168d24c items=0 ppid=2426 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:40.318000 audit[2516]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:40.318000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd4168d260 a2=0 a3=7ffd4168d24c items=0 ppid=2426 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.318000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:40.323000 audit[2523]: NETFILTER_CFG table=filter:65 family=2 entries=14 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:40.323000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe25371aa0 a2=0 a3=7ffe25371a8c items=0 ppid=2426 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:40.323000 audit[2523]: NETFILTER_CFG table=nat:66 family=2 entries=12 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:40.323000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffe25371aa0 a2=0 a3=0 items=0 ppid=2426 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:40.336000 audit[2524]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.336000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd03705bd0 a2=0 a3=7ffd03705bbc items=0 ppid=2426 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 16:19:40.339000 audit[2526]: NETFILTER_CFG table=filter:68 family=10 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.339000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffe7f209c0 a2=0 a3=7fffe7f209ac items=0 ppid=2426 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 16:19:40.349000 audit[2529]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.349000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc6d585fc0 a2=0 a3=7ffc6d585fac items=0 ppid=2426 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 16:19:40.351000 audit[2530]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.351000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc73073ab0 a2=0 a3=7ffc73073a9c items=0 ppid=2426 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.351000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 16:19:40.353000 audit[2532]: NETFILTER_CFG table=filter:71 family=10 entries=1 
op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.353000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe50d14bf0 a2=0 a3=7ffe50d14bdc items=0 ppid=2426 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.353000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 16:19:40.354000 audit[2533]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.354000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe683a5440 a2=0 a3=7ffe683a542c items=0 ppid=2426 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 16:19:40.356000 audit[2535]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.356000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff63955a00 a2=0 a3=7fff639559ec items=0 ppid=2426 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 16:19:40.360000 audit[2538]: NETFILTER_CFG table=filter:74 family=10 entries=2 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.360000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd059176a0 a2=0 a3=7ffd0591768c items=0 ppid=2426 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 16:19:40.361000 audit[2539]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.361000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbf97f0f0 a2=0 a3=7ffcbf97f0dc items=0 ppid=2426 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 16:19:40.364000 audit[2541]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.364000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe88ad20c0 a2=0 a3=7ffe88ad20ac items=0 ppid=2426 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 16:19:40.365000 audit[2542]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.365000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6b302030 a2=0 a3=7ffc6b30201c items=0 ppid=2426 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 16:19:40.368000 audit[2544]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.368000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd24e489c0 a2=0 a3=7ffd24e489ac items=0 ppid=2426 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 16:19:40.372000 audit[2547]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.372000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcea250c60 a2=0 a3=7ffcea250c4c items=0 ppid=2426 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 16:19:40.375000 audit[2550]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.375000 audit[2550]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4dc01490 a2=0 a3=7ffe4dc0147c items=0 ppid=2426 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 16:19:40.377000 audit[2551]: NETFILTER_CFG table=nat:81 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.377000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2f939c20 a2=0 a3=7ffd2f939c0c items=0 ppid=2426 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 16:19:40.379000 audit[2553]: NETFILTER_CFG table=nat:82 family=10 entries=2 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.379000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffca6414d40 a2=0 a3=7ffca6414d2c items=0 ppid=2426 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.379000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:19:40.382000 audit[2556]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.382000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffeb72d9850 a2=0 a3=7ffeb72d983c items=0 ppid=2426 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 16:19:40.383000 audit[2557]: NETFILTER_CFG table=nat:84 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.383000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd24fa8e0 a2=0 a3=7ffdd24fa8cc items=0 ppid=2426 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 16:19:40.385000 
audit[2559]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.385000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffedaa54e90 a2=0 a3=7ffedaa54e7c items=0 ppid=2426 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 16:19:40.386000 audit[2560]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.386000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd546d0d90 a2=0 a3=7ffd546d0d7c items=0 ppid=2426 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 16:19:40.388000 audit[2562]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.388000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc9dae19e0 a2=0 a3=7ffc9dae19cc items=0 ppid=2426 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:19:40.391000 audit[2565]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 16:19:40.391000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc20ac3f0 a2=0 a3=7ffcc20ac3dc items=0 ppid=2426 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 16:19:40.394000 audit[2567]: NETFILTER_CFG table=filter:89 family=10 entries=3 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:19:40.394000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fff74f3cbc0 a2=0 a3=7fff74f3cbac items=0 ppid=2426 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.394000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 
16:19:40.395000 audit[2567]: NETFILTER_CFG table=nat:90 family=10 entries=7 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 16:19:40.395000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff74f3cbc0 a2=0 a3=7fff74f3cbac items=0 ppid=2426 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.395000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:40.447531 containerd[1287]: time="2024-06-25T16:19:40.447212109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:40.447697 containerd[1287]: time="2024-06-25T16:19:40.447562047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:40.447697 containerd[1287]: time="2024-06-25T16:19:40.447583497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:40.447697 containerd[1287]: time="2024-06-25T16:19:40.447638151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:40.450595 kubelet[2281]: E0625 16:19:40.449520 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:40.477686 systemd[1]: Started cri-containerd-2f0f00b2c2922fbf98c26a59569ca8ecfcf0622b05971d7a051595eabf9b5484.scope - libcontainer container 2f0f00b2c2922fbf98c26a59569ca8ecfcf0622b05971d7a051595eabf9b5484. 
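In the NETFILTER_CFG records above, family=2 and family=10 are the numeric address families, i.e. the same chains and rules being registered twice, once through iptables for IPv4 and once through ip6tables for IPv6. A trivial illustrative check of those constants (Linux values):

    # Illustrative: family=2 / family=10 in NETFILTER_CFG correspond to AF_INET / AF_INET6 on Linux.
    import socket
    print(int(socket.AF_INET), int(socket.AF_INET6))
    # -> 2 10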
Jun 25 16:19:40.487527 kubelet[2281]: I0625 16:19:40.487476 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2bcg5" podStartSLOduration=2.4874319209999998 podCreationTimestamp="2024-06-25 16:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:19:40.467003395 +0000 UTC m=+17.128047237" watchObservedRunningTime="2024-06-25 16:19:40.487431921 +0000 UTC m=+17.148475763" Jun 25 16:19:40.486000 audit: BPF prog-id=109 op=LOAD Jun 25 16:19:40.487000 audit: BPF prog-id=110 op=LOAD Jun 25 16:19:40.487000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2576 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306630306232633239323266626639386332366135393536396361 Jun 25 16:19:40.487000 audit: BPF prog-id=111 op=LOAD Jun 25 16:19:40.487000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2576 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306630306232633239323266626639386332366135393536396361 Jun 25 16:19:40.487000 audit: BPF prog-id=111 op=UNLOAD Jun 25 16:19:40.487000 audit: BPF prog-id=110 op=UNLOAD Jun 25 16:19:40.487000 audit: BPF prog-id=112 op=LOAD Jun 25 16:19:40.487000 audit[2586]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2576 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:40.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266306630306232633239323266626639386332366135393536396361 Jun 25 16:19:40.513631 containerd[1287]: time="2024-06-25T16:19:40.513487697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-7px8z,Uid:422db163-f400-4cc3-9c34-3724d685be5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f0f00b2c2922fbf98c26a59569ca8ecfcf0622b05971d7a051595eabf9b5484\"" Jun 25 16:19:40.519480 containerd[1287]: time="2024-06-25T16:19:40.519439388Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 16:19:43.034208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950593202.mount: Deactivated successfully. 
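The pod_startup_latency_tracker entry above reports podStartSLOduration=2.4874319209999998 for kube-proxy-2bcg5 with both pull timestamps zeroed; that figure is consistent with watchObservedRunningTime (16:19:40.487431921) minus podCreationTimestamp (16:19:38). Illustrative arithmetic using only the values quoted in that entry:

    # Illustrative: reproduce the reported pod-start SLO duration from the entry's own timestamps.
    start_ns = 38 * 10**9                # podCreationTimestamp 16:19:38.000000000
    ready_ns = 40 * 10**9 + 487_431_921  # watchObservedRunningTime 16:19:40.487431921
    print((ready_ns - start_ns) / 1e9)
    # -> 2.487431921 (the kubelet prints it as 2.4874319209999998)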
Jun 25 16:19:46.315183 containerd[1287]: time="2024-06-25T16:19:46.315119508Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:46.345693 containerd[1287]: time="2024-06-25T16:19:46.345591955Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jun 25 16:19:46.388082 containerd[1287]: time="2024-06-25T16:19:46.388010275Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:46.436349 containerd[1287]: time="2024-06-25T16:19:46.436283409Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:46.457177 containerd[1287]: time="2024-06-25T16:19:46.457099746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:46.457787 containerd[1287]: time="2024-06-25T16:19:46.457721986Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 5.938112828s" Jun 25 16:19:46.457787 containerd[1287]: time="2024-06-25T16:19:46.457769314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 16:19:46.460257 containerd[1287]: time="2024-06-25T16:19:46.460193330Z" level=info msg="CreateContainer within sandbox \"2f0f00b2c2922fbf98c26a59569ca8ecfcf0622b05971d7a051595eabf9b5484\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 16:19:46.496387 containerd[1287]: time="2024-06-25T16:19:46.496182697Z" level=info msg="CreateContainer within sandbox \"2f0f00b2c2922fbf98c26a59569ca8ecfcf0622b05971d7a051595eabf9b5484\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b77405336c0d9eb3c80f6b977aee59a1ac544e74f5eb03ff388f7df67209abd1\"" Jun 25 16:19:46.497031 containerd[1287]: time="2024-06-25T16:19:46.496980096Z" level=info msg="StartContainer for \"b77405336c0d9eb3c80f6b977aee59a1ac544e74f5eb03ff388f7df67209abd1\"" Jun 25 16:19:46.525536 systemd[1]: Started cri-containerd-b77405336c0d9eb3c80f6b977aee59a1ac544e74f5eb03ff388f7df67209abd1.scope - libcontainer container b77405336c0d9eb3c80f6b977aee59a1ac544e74f5eb03ff388f7df67209abd1. 
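The pull of quay.io/tigera/operator:v1.34.0 reports bytes read=22076068 and completes in 5.938112828s, which allows a rough transfer-rate estimate. Illustrative only; the reading of the byte counter as the registry data actually fetched is an assumption, not stated in the log:

    # Rough, illustrative throughput estimate from the two figures above.
    bytes_read = 22_076_068      # "stop pulling image ...: active requests=0, bytes read=22076068"
    pull_secs = 5.938112828      # "Pulled image ... in 5.938112828s"
    print(f"{bytes_read / pull_secs / 1e6:.1f} MB/s")
    # -> ~3.7 MB/s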
Jun 25 16:19:46.537000 audit: BPF prog-id=113 op=LOAD Jun 25 16:19:46.539809 kernel: kauditd_printk_skb: 196 callbacks suppressed Jun 25 16:19:46.539907 kernel: audit: type=1334 audit(1719332386.537:452): prog-id=113 op=LOAD Jun 25 16:19:46.537000 audit: BPF prog-id=114 op=LOAD Jun 25 16:19:46.541861 kernel: audit: type=1334 audit(1719332386.537:453): prog-id=114 op=LOAD Jun 25 16:19:46.537000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2576 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:46.546700 kernel: audit: type=1300 audit(1719332386.537:453): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2576 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:46.546848 kernel: audit: type=1327 audit(1719332386.537:453): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237373430353333366330643965623363383066366239373761656535 Jun 25 16:19:46.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237373430353333366330643965623363383066366239373761656535 Jun 25 16:19:46.552559 kernel: audit: type=1334 audit(1719332386.537:454): prog-id=115 op=LOAD Jun 25 16:19:46.537000 audit: BPF prog-id=115 op=LOAD Jun 25 16:19:46.557537 kernel: audit: type=1300 audit(1719332386.537:454): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2576 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:46.537000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2576 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:46.562493 kernel: audit: type=1327 audit(1719332386.537:454): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237373430353333366330643965623363383066366239373761656535 Jun 25 16:19:46.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237373430353333366330643965623363383066366239373761656535 Jun 25 16:19:46.564443 kernel: audit: type=1334 audit(1719332386.537:455): prog-id=115 op=UNLOAD Jun 25 16:19:46.537000 audit: BPF prog-id=115 op=UNLOAD Jun 25 16:19:46.565744 kernel: audit: type=1334 audit(1719332386.537:456): prog-id=114 op=UNLOAD Jun 25 16:19:46.537000 audit: BPF prog-id=114 op=UNLOAD Jun 25 16:19:46.567462 kernel: audit: type=1334 
audit(1719332386.537:457): prog-id=116 op=LOAD Jun 25 16:19:46.537000 audit: BPF prog-id=116 op=LOAD Jun 25 16:19:46.567565 containerd[1287]: time="2024-06-25T16:19:46.566531516Z" level=info msg="StartContainer for \"b77405336c0d9eb3c80f6b977aee59a1ac544e74f5eb03ff388f7df67209abd1\" returns successfully" Jun 25 16:19:46.537000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2576 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:46.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237373430353333366330643965623363383066366239373761656535 Jun 25 16:19:49.618000 audit[2662]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:49.618000 audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc41254920 a2=0 a3=7ffc4125490c items=0 ppid=2426 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:49.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:49.619000 audit[2662]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:49.619000 audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc41254920 a2=0 a3=0 items=0 ppid=2426 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:49.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:49.628000 audit[2664]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:49.628000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffbe3a18c0 a2=0 a3=7fffbe3a18ac items=0 ppid=2426 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:49.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:49.629000 audit[2664]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:49.629000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbe3a18c0 a2=0 a3=0 items=0 ppid=2426 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:49.629000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:49.746791 kubelet[2281]: I0625 16:19:49.746735 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-7px8z" podStartSLOduration=4.803597326 podCreationTimestamp="2024-06-25 16:19:39 +0000 UTC" firstStartedPulling="2024-06-25 16:19:40.514997388 +0000 UTC m=+17.176041230" lastFinishedPulling="2024-06-25 16:19:46.458083896 +0000 UTC m=+23.119127748" observedRunningTime="2024-06-25 16:19:47.468948206 +0000 UTC m=+24.129992059" watchObservedRunningTime="2024-06-25 16:19:49.746683844 +0000 UTC m=+26.407727696" Jun 25 16:19:49.747270 kubelet[2281]: I0625 16:19:49.746931 2281 topology_manager.go:215] "Topology Admit Handler" podUID="c7b2d332-bc1c-4bfa-bfae-9b777344bf2d" podNamespace="calico-system" podName="calico-typha-589d585b6d-s2bjj" Jun 25 16:19:49.754727 systemd[1]: Created slice kubepods-besteffort-podc7b2d332_bc1c_4bfa_bfae_9b777344bf2d.slice - libcontainer container kubepods-besteffort-podc7b2d332_bc1c_4bfa_bfae_9b777344bf2d.slice. Jun 25 16:19:49.772726 kubelet[2281]: I0625 16:19:49.772672 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c7b2d332-bc1c-4bfa-bfae-9b777344bf2d-typha-certs\") pod \"calico-typha-589d585b6d-s2bjj\" (UID: \"c7b2d332-bc1c-4bfa-bfae-9b777344bf2d\") " pod="calico-system/calico-typha-589d585b6d-s2bjj" Jun 25 16:19:49.772726 kubelet[2281]: I0625 16:19:49.772728 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7b2d332-bc1c-4bfa-bfae-9b777344bf2d-tigera-ca-bundle\") pod \"calico-typha-589d585b6d-s2bjj\" (UID: \"c7b2d332-bc1c-4bfa-bfae-9b777344bf2d\") " pod="calico-system/calico-typha-589d585b6d-s2bjj" Jun 25 16:19:49.772965 kubelet[2281]: I0625 16:19:49.772758 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd7vq\" (UniqueName: \"kubernetes.io/projected/c7b2d332-bc1c-4bfa-bfae-9b777344bf2d-kube-api-access-vd7vq\") pod \"calico-typha-589d585b6d-s2bjj\" (UID: \"c7b2d332-bc1c-4bfa-bfae-9b777344bf2d\") " pod="calico-system/calico-typha-589d585b6d-s2bjj" Jun 25 16:19:49.804558 kubelet[2281]: I0625 16:19:49.804524 2281 topology_manager.go:215] "Topology Admit Handler" podUID="922df580-a65a-49ff-9335-9d705be62ed6" podNamespace="calico-system" podName="calico-node-tzhl5" Jun 25 16:19:49.810265 systemd[1]: Created slice kubepods-besteffort-pod922df580_a65a_49ff_9335_9d705be62ed6.slice - libcontainer container kubepods-besteffort-pod922df580_a65a_49ff_9335_9d705be62ed6.slice. 
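The "Observed pod startup duration" record at the top of this block carries enough timestamps to reconstruct podStartSLOduration by hand: roughly the time from pod creation to the watch-observed running time, minus the image-pull window. A sketch of that arithmetic for the tigera-operator pod; the formula is inferred from the logged fields and reproduces the value only to within the last couple of digits, since the kubelet works from its own monotonic-clock offsets:

# All four timestamps fall inside the same minute (16:19), so seconds suffice.
created   = 39.0            # podCreationTimestamp      2024-06-25 16:19:39
pull_from = 40.514997388    # firstStartedPulling       16:19:40.514997388
pull_to   = 46.458083896    # lastFinishedPulling       16:19:46.458083896
observed  = 49.746683844    # watchObservedRunningTime  16:19:49.746683844

slo = (observed - created) - (pull_to - pull_from)
print(slo)   # ~4.803597336, against podStartSLOduration=4.803597326 in the log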
Jun 25 16:19:49.874270 kubelet[2281]: I0625 16:19:49.874066 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-lib-modules\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874270 kubelet[2281]: I0625 16:19:49.874138 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-run-calico\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874483 kubelet[2281]: I0625 16:19:49.874314 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-xtables-lock\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874483 kubelet[2281]: I0625 16:19:49.874366 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-log-dir\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874483 kubelet[2281]: I0625 16:19:49.874396 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922df580-a65a-49ff-9335-9d705be62ed6-tigera-ca-bundle\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874483 kubelet[2281]: I0625 16:19:49.874420 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-net-dir\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874483 kubelet[2281]: I0625 16:19:49.874447 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-policysync\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874619 kubelet[2281]: I0625 16:19:49.874470 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/922df580-a65a-49ff-9335-9d705be62ed6-node-certs\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874619 kubelet[2281]: I0625 16:19:49.874499 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-bin-dir\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874619 kubelet[2281]: I0625 16:19:49.874528 2281 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-flexvol-driver-host\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874619 kubelet[2281]: I0625 16:19:49.874554 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-lib-calico\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.874619 kubelet[2281]: I0625 16:19:49.874586 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dm72\" (UniqueName: \"kubernetes.io/projected/922df580-a65a-49ff-9335-9d705be62ed6-kube-api-access-2dm72\") pod \"calico-node-tzhl5\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " pod="calico-system/calico-node-tzhl5" Jun 25 16:19:49.914517 kubelet[2281]: I0625 16:19:49.914475 2281 topology_manager.go:215] "Topology Admit Handler" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" podNamespace="calico-system" podName="csi-node-driver-v8xmb" Jun 25 16:19:49.915036 kubelet[2281]: E0625 16:19:49.915018 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:19:49.975203 kubelet[2281]: I0625 16:19:49.975152 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77a47d09-249f-41aa-9f0e-6a405db06ba3-kubelet-dir\") pod \"csi-node-driver-v8xmb\" (UID: \"77a47d09-249f-41aa-9f0e-6a405db06ba3\") " pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:19:49.975203 kubelet[2281]: I0625 16:19:49.975230 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77a47d09-249f-41aa-9f0e-6a405db06ba3-varrun\") pod \"csi-node-driver-v8xmb\" (UID: \"77a47d09-249f-41aa-9f0e-6a405db06ba3\") " pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:19:49.975441 kubelet[2281]: I0625 16:19:49.975342 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfkcm\" (UniqueName: \"kubernetes.io/projected/77a47d09-249f-41aa-9f0e-6a405db06ba3-kube-api-access-kfkcm\") pod \"csi-node-driver-v8xmb\" (UID: \"77a47d09-249f-41aa-9f0e-6a405db06ba3\") " pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:19:49.975441 kubelet[2281]: I0625 16:19:49.975421 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77a47d09-249f-41aa-9f0e-6a405db06ba3-registration-dir\") pod \"csi-node-driver-v8xmb\" (UID: \"77a47d09-249f-41aa-9f0e-6a405db06ba3\") " pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:19:49.975583 kubelet[2281]: I0625 16:19:49.975554 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77a47d09-249f-41aa-9f0e-6a405db06ba3-socket-dir\") pod 
\"csi-node-driver-v8xmb\" (UID: \"77a47d09-249f-41aa-9f0e-6a405db06ba3\") " pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:19:49.976748 kubelet[2281]: E0625 16:19:49.976732 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.976830 kubelet[2281]: W0625 16:19:49.976817 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.976909 kubelet[2281]: E0625 16:19:49.976901 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.977294 kubelet[2281]: E0625 16:19:49.977279 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.977369 kubelet[2281]: W0625 16:19:49.977358 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.977422 kubelet[2281]: E0625 16:19:49.977416 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.977946 kubelet[2281]: E0625 16:19:49.977936 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.978005 kubelet[2281]: W0625 16:19:49.977997 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.978122 kubelet[2281]: E0625 16:19:49.978100 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.978282 kubelet[2281]: E0625 16:19:49.978273 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.978357 kubelet[2281]: W0625 16:19:49.978346 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.978495 kubelet[2281]: E0625 16:19:49.978485 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.978748 kubelet[2281]: E0625 16:19:49.978730 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.978809 kubelet[2281]: W0625 16:19:49.978800 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.978960 kubelet[2281]: E0625 16:19:49.978951 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:49.981437 kubelet[2281]: E0625 16:19:49.981410 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.981531 kubelet[2281]: W0625 16:19:49.981515 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.981640 kubelet[2281]: E0625 16:19:49.981628 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.983393 kubelet[2281]: E0625 16:19:49.983380 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.983478 kubelet[2281]: W0625 16:19:49.983466 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.983558 kubelet[2281]: E0625 16:19:49.983548 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.983788 kubelet[2281]: E0625 16:19:49.983768 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.983788 kubelet[2281]: W0625 16:19:49.983785 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.983867 kubelet[2281]: E0625 16:19:49.983813 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.984014 kubelet[2281]: E0625 16:19:49.983995 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.984178 kubelet[2281]: W0625 16:19:49.984142 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.984178 kubelet[2281]: E0625 16:19:49.984164 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:49.984515 kubelet[2281]: E0625 16:19:49.984500 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.984594 kubelet[2281]: W0625 16:19:49.984582 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.984659 kubelet[2281]: E0625 16:19:49.984650 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:49.991402 kubelet[2281]: E0625 16:19:49.991372 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:49.991402 kubelet[2281]: W0625 16:19:49.991389 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:49.991402 kubelet[2281]: E0625 16:19:49.991410 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.050366 kubelet[2281]: E0625 16:19:50.050331 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:50.051557 containerd[1287]: time="2024-06-25T16:19:50.051507916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzhl5,Uid:922df580-a65a-49ff-9335-9d705be62ed6,Namespace:calico-system,Attempt:0,}" Jun 25 16:19:50.058138 kubelet[2281]: E0625 16:19:50.058091 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:50.058665 containerd[1287]: time="2024-06-25T16:19:50.058616830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589d585b6d-s2bjj,Uid:c7b2d332-bc1c-4bfa-bfae-9b777344bf2d,Namespace:calico-system,Attempt:0,}" Jun 25 16:19:50.077485 kubelet[2281]: E0625 16:19:50.077444 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.077485 kubelet[2281]: W0625 16:19:50.077469 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.077485 kubelet[2281]: E0625 16:19:50.077492 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.077830 kubelet[2281]: E0625 16:19:50.077807 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.077863 kubelet[2281]: W0625 16:19:50.077825 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.077863 kubelet[2281]: E0625 16:19:50.077851 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:50.078099 kubelet[2281]: E0625 16:19:50.078079 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.078099 kubelet[2281]: W0625 16:19:50.078093 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.078169 kubelet[2281]: E0625 16:19:50.078114 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.078324 kubelet[2281]: E0625 16:19:50.078303 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.078324 kubelet[2281]: W0625 16:19:50.078316 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.078390 kubelet[2281]: E0625 16:19:50.078335 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.078545 kubelet[2281]: E0625 16:19:50.078525 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.078545 kubelet[2281]: W0625 16:19:50.078540 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.078599 kubelet[2281]: E0625 16:19:50.078552 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.080323 kubelet[2281]: E0625 16:19:50.080301 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.080323 kubelet[2281]: W0625 16:19:50.080317 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.080403 kubelet[2281]: E0625 16:19:50.080345 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.080558 kubelet[2281]: E0625 16:19:50.080537 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.080558 kubelet[2281]: W0625 16:19:50.080551 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.080633 kubelet[2281]: E0625 16:19:50.080614 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:50.080725 kubelet[2281]: E0625 16:19:50.080709 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.080725 kubelet[2281]: W0625 16:19:50.080720 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.080776 kubelet[2281]: E0625 16:19:50.080770 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.080888 kubelet[2281]: E0625 16:19:50.080859 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.080888 kubelet[2281]: W0625 16:19:50.080876 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.080944 kubelet[2281]: E0625 16:19:50.080929 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.081046 kubelet[2281]: E0625 16:19:50.081029 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.081046 kubelet[2281]: W0625 16:19:50.081042 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.081113 kubelet[2281]: E0625 16:19:50.081096 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.081191 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.083333 kubelet[2281]: W0625 16:19:50.081200 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.081284 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.081326 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.083333 kubelet[2281]: W0625 16:19:50.081341 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.081352 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.082261 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.083333 kubelet[2281]: W0625 16:19:50.082268 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.082331 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.083333 kubelet[2281]: E0625 16:19:50.082437 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.083560 kubelet[2281]: W0625 16:19:50.082443 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.083560 kubelet[2281]: E0625 16:19:50.082492 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.083560 kubelet[2281]: E0625 16:19:50.082565 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.083560 kubelet[2281]: W0625 16:19:50.082579 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.083560 kubelet[2281]: E0625 16:19:50.082626 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089372 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092245 kubelet[2281]: W0625 16:19:50.089401 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089515 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089649 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092245 kubelet[2281]: W0625 16:19:50.089657 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089724 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089811 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092245 kubelet[2281]: W0625 16:19:50.089817 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089867 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092245 kubelet[2281]: E0625 16:19:50.089990 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092675 kubelet[2281]: W0625 16:19:50.089995 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090073 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090107 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092675 kubelet[2281]: W0625 16:19:50.090123 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090134 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090406 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092675 kubelet[2281]: W0625 16:19:50.090418 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090444 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092675 kubelet[2281]: E0625 16:19:50.090593 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092675 kubelet[2281]: W0625 16:19:50.090599 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.090613 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.090792 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092938 kubelet[2281]: W0625 16:19:50.090799 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.090810 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.090955 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092938 kubelet[2281]: W0625 16:19:50.090963 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.090974 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.091104 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.092938 kubelet[2281]: W0625 16:19:50.091111 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.092938 kubelet[2281]: E0625 16:19:50.091121 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.100385 kubelet[2281]: E0625 16:19:50.100354 2281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 16:19:50.100385 kubelet[2281]: W0625 16:19:50.100377 2281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 16:19:50.100520 kubelet[2281]: E0625 16:19:50.100397 2281 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 16:19:50.108515 containerd[1287]: time="2024-06-25T16:19:50.108414754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:50.108667 containerd[1287]: time="2024-06-25T16:19:50.108535762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:50.108667 containerd[1287]: time="2024-06-25T16:19:50.108563434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:50.108667 containerd[1287]: time="2024-06-25T16:19:50.108586818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:50.132449 systemd[1]: Started cri-containerd-728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f.scope - libcontainer container 728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f. Jun 25 16:19:50.137834 containerd[1287]: time="2024-06-25T16:19:50.137731125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:50.138040 containerd[1287]: time="2024-06-25T16:19:50.138018084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:50.138153 containerd[1287]: time="2024-06-25T16:19:50.138117761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:50.138265 containerd[1287]: time="2024-06-25T16:19:50.138238438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:50.143000 audit: BPF prog-id=117 op=LOAD Jun 25 16:19:50.143000 audit: BPF prog-id=118 op=LOAD Jun 25 16:19:50.143000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2705 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732386165336533643362646662313633643138633735396232303363 Jun 25 16:19:50.143000 audit: BPF prog-id=119 op=LOAD Jun 25 16:19:50.143000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2705 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732386165336533643362646662313633643138633735396232303363 Jun 25 16:19:50.143000 audit: BPF prog-id=119 op=UNLOAD Jun 25 16:19:50.143000 audit: BPF prog-id=118 op=UNLOAD Jun 25 16:19:50.143000 audit: BPF prog-id=120 op=LOAD Jun 25 16:19:50.143000 audit[2725]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2705 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732386165336533643362646662313633643138633735396232303363 Jun 25 16:19:50.153352 systemd[1]: Started cri-containerd-e09a51149c20357e7a054fe644e27ed45b4640e29f39d8ca164506861c9824d2.scope - libcontainer container e09a51149c20357e7a054fe644e27ed45b4640e29f39d8ca164506861c9824d2. 
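The repeated driver-call.go/plugins.go failures above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with init before calico's flexvol-driver container (started a little further down) has put that binary in place: the call yields no stdout, and an empty string is not valid JSON. A minimal sketch of the probe's shape; the expected reply follows the usual FlexVolume convention and is an assumption, not a quote from the kubelet:

import json
import subprocess

DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_init(driver: str = DRIVER) -> dict:
    """Run `<driver> init` the way a FlexVolume probe does and parse its stdout."""
    try:
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""   # mirrors the "executable file not found in $PATH" case above
    # "" is not valid JSON, so this raises -- the Python analogue of the
    # kubelet's "unexpected end of JSON input".
    return json.loads(out)

# A conforming driver answers init with something like
#   {"status": "Success", "capabilities": {"attach": false}}
# which is what the probe needs once the flexvol-driver container has copied
# the uds binary onto the host path above.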
Jun 25 16:19:50.165472 containerd[1287]: time="2024-06-25T16:19:50.165420399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzhl5,Uid:922df580-a65a-49ff-9335-9d705be62ed6,Namespace:calico-system,Attempt:0,} returns sandbox id \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\"" Jun 25 16:19:50.166076 kubelet[2281]: E0625 16:19:50.166047 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:50.165000 audit: BPF prog-id=121 op=LOAD Jun 25 16:19:50.167273 containerd[1287]: time="2024-06-25T16:19:50.167245207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 16:19:50.166000 audit: BPF prog-id=122 op=LOAD Jun 25 16:19:50.166000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2740 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530396135313134396332303335376537613035346665363434653237 Jun 25 16:19:50.166000 audit: BPF prog-id=123 op=LOAD Jun 25 16:19:50.166000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2740 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530396135313134396332303335376537613035346665363434653237 Jun 25 16:19:50.166000 audit: BPF prog-id=123 op=UNLOAD Jun 25 16:19:50.166000 audit: BPF prog-id=122 op=UNLOAD Jun 25 16:19:50.166000 audit: BPF prog-id=124 op=LOAD Jun 25 16:19:50.166000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2740 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530396135313134396332303335376537613035346665363434653237 Jun 25 16:19:50.196187 containerd[1287]: time="2024-06-25T16:19:50.196134545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-589d585b6d-s2bjj,Uid:c7b2d332-bc1c-4bfa-bfae-9b777344bf2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"e09a51149c20357e7a054fe644e27ed45b4640e29f39d8ca164506861c9824d2\"" Jun 25 16:19:50.197413 kubelet[2281]: E0625 16:19:50.197385 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:50.646000 audit[2788]: 
NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:50.646000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff3a7e8730 a2=0 a3=7fff3a7e871c items=0 ppid=2426 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:50.647000 audit[2788]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:19:50.647000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3a7e8730 a2=0 a3=0 items=0 ppid=2426 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:50.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:19:51.417752 kubelet[2281]: E0625 16:19:51.417672 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:19:51.550228 containerd[1287]: time="2024-06-25T16:19:51.550164357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:51.551000 containerd[1287]: time="2024-06-25T16:19:51.550928542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 16:19:51.552150 containerd[1287]: time="2024-06-25T16:19:51.552120973Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:51.553960 containerd[1287]: time="2024-06-25T16:19:51.553897639Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:51.555549 containerd[1287]: time="2024-06-25T16:19:51.555499618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:51.556059 containerd[1287]: time="2024-06-25T16:19:51.556026338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.388746516s" Jun 25 16:19:51.556103 containerd[1287]: time="2024-06-25T16:19:51.556060412Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 16:19:51.556785 containerd[1287]: time="2024-06-25T16:19:51.556745478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 16:19:51.559134 containerd[1287]: time="2024-06-25T16:19:51.559089381Z" level=info msg="CreateContainer within sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:19:51.576235 containerd[1287]: time="2024-06-25T16:19:51.576157478Z" level=info msg="CreateContainer within sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\"" Jun 25 16:19:51.576754 containerd[1287]: time="2024-06-25T16:19:51.576727569Z" level=info msg="StartContainer for \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\"" Jun 25 16:19:51.601521 systemd[1]: Started cri-containerd-499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623.scope - libcontainer container 499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623. Jun 25 16:19:51.612000 audit: BPF prog-id=125 op=LOAD Jun 25 16:19:51.615899 kernel: kauditd_printk_skb: 44 callbacks suppressed Jun 25 16:19:51.615964 kernel: audit: type=1334 audit(1719332391.612:476): prog-id=125 op=LOAD Jun 25 16:19:51.615986 kernel: audit: type=1300 audit(1719332391.612:476): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.612000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396432346233306232323562396430626365356464633539613462 Jun 25 16:19:51.623628 kernel: audit: type=1327 audit(1719332391.612:476): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396432346233306232323562396430626365356464633539613462 Jun 25 16:19:51.623703 kernel: audit: type=1334 audit(1719332391.613:477): prog-id=126 op=LOAD Jun 25 16:19:51.613000 audit: BPF prog-id=126 op=LOAD Jun 25 16:19:51.613000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.628239 kernel: audit: type=1300 audit(1719332391.613:477): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.628291 kernel: audit: type=1327 audit(1719332391.613:477): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396432346233306232323562396430626365356464633539613462 Jun 25 16:19:51.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396432346233306232323562396430626365356464633539613462 Jun 25 16:19:51.613000 audit: BPF prog-id=126 op=UNLOAD Jun 25 16:19:51.633284 kernel: audit: type=1334 audit(1719332391.613:478): prog-id=126 op=UNLOAD Jun 25 16:19:51.633317 kernel: audit: type=1334 audit(1719332391.613:479): prog-id=125 op=UNLOAD Jun 25 16:19:51.613000 audit: BPF prog-id=125 op=UNLOAD Jun 25 16:19:51.613000 audit: BPF prog-id=127 op=LOAD Jun 25 16:19:51.635049 kernel: audit: type=1334 audit(1719332391.613:480): prog-id=127 op=LOAD Jun 25 16:19:51.635086 kernel: audit: type=1300 audit(1719332391.613:480): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.613000 audit[2801]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2705 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:51.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439396432346233306232323562396430626365356464633539613462 Jun 25 16:19:51.639269 systemd[1]: cri-containerd-499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623.scope: Deactivated successfully. Jun 25 16:19:51.642000 audit: BPF prog-id=127 op=UNLOAD Jun 25 16:19:51.655902 containerd[1287]: time="2024-06-25T16:19:51.655841224Z" level=info msg="StartContainer for \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\" returns successfully" Jun 25 16:19:51.725685 containerd[1287]: time="2024-06-25T16:19:51.725511722Z" level=info msg="shim disconnected" id=499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623 namespace=k8s.io Jun 25 16:19:51.725685 containerd[1287]: time="2024-06-25T16:19:51.725582084Z" level=warning msg="cleaning up after shim disconnected" id=499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623 namespace=k8s.io Jun 25 16:19:51.725685 containerd[1287]: time="2024-06-25T16:19:51.725590740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:19:51.879395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623-rootfs.mount: Deactivated successfully. 
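The PROCTITLE values in the audit records above are hex-encoded command lines in which NUL bytes separate the arguments. A minimal decoding sketch (Python; not part of the captured log) using the iptables-restore value quoted above:

# Decode an audit PROCTITLE hex string into its argument vector.
# The kernel hex-encodes the command line because it contains NUL separators.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Value copied from the iptables-restore PROCTITLE record above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The runc PROCTITLE values in the surrounding records decode the same way, yielding a truncated "runc --root /run/containerd/runc/k8s.io --log ..." command line.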
Jun 25 16:19:52.478092 containerd[1287]: time="2024-06-25T16:19:52.478023767Z" level=info msg="StopPodSandbox for \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\"" Jun 25 16:19:52.478292 containerd[1287]: time="2024-06-25T16:19:52.478140166Z" level=info msg="Container to stop \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 16:19:52.481294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f-shm.mount: Deactivated successfully. Jun 25 16:19:52.483000 audit: BPF prog-id=117 op=UNLOAD Jun 25 16:19:52.484933 systemd[1]: cri-containerd-728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f.scope: Deactivated successfully. Jun 25 16:19:52.489000 audit: BPF prog-id=120 op=UNLOAD Jun 25 16:19:52.503588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f-rootfs.mount: Deactivated successfully. Jun 25 16:19:52.509799 containerd[1287]: time="2024-06-25T16:19:52.509729141Z" level=info msg="shim disconnected" id=728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f namespace=k8s.io Jun 25 16:19:52.509799 containerd[1287]: time="2024-06-25T16:19:52.509783322Z" level=warning msg="cleaning up after shim disconnected" id=728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f namespace=k8s.io Jun 25 16:19:52.509799 containerd[1287]: time="2024-06-25T16:19:52.509791057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:19:52.521769 containerd[1287]: time="2024-06-25T16:19:52.521693169Z" level=info msg="TearDown network for sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" successfully" Jun 25 16:19:52.521769 containerd[1287]: time="2024-06-25T16:19:52.521743744Z" level=info msg="StopPodSandbox for \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" returns successfully" Jun 25 16:19:52.595024 kubelet[2281]: I0625 16:19:52.594970 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-log-dir\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595444 kubelet[2281]: I0625 16:19:52.595052 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922df580-a65a-49ff-9335-9d705be62ed6-tigera-ca-bundle\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595444 kubelet[2281]: I0625 16:19:52.595070 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.595444 kubelet[2281]: I0625 16:19:52.595125 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). 
InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.595444 kubelet[2281]: I0625 16:19:52.595086 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-run-calico\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595444 kubelet[2281]: I0625 16:19:52.595187 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-lib-calico\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595243 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/922df580-a65a-49ff-9335-9d705be62ed6-node-certs\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595257 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595270 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-net-dir\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595285 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595320 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-policysync\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595655 kubelet[2281]: I0625 16:19:52.595350 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-xtables-lock\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595375 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-lib-modules\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595413 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dm72\" (UniqueName: \"kubernetes.io/projected/922df580-a65a-49ff-9335-9d705be62ed6-kube-api-access-2dm72\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595439 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-flexvol-driver-host\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595466 2281 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-bin-dir\") pod \"922df580-a65a-49ff-9335-9d705be62ed6\" (UID: \"922df580-a65a-49ff-9335-9d705be62ed6\") " Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595526 2281 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.595784 kubelet[2281]: I0625 16:19:52.595544 2281 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.595936 kubelet[2281]: I0625 16:19:52.595547 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/922df580-a65a-49ff-9335-9d705be62ed6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 16:19:52.595936 kubelet[2281]: I0625 16:19:52.595558 2281 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.595936 kubelet[2281]: I0625 16:19:52.595574 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.595936 kubelet[2281]: I0625 16:19:52.595594 2281 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.595936 kubelet[2281]: I0625 16:19:52.595654 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.596049 kubelet[2281]: I0625 16:19:52.595619 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.596049 kubelet[2281]: I0625 16:19:52.595639 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-policysync" (OuterVolumeSpecName: "policysync") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.596049 kubelet[2281]: I0625 16:19:52.595695 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 16:19:52.601736 kubelet[2281]: I0625 16:19:52.601657 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/922df580-a65a-49ff-9335-9d705be62ed6-kube-api-access-2dm72" (OuterVolumeSpecName: "kube-api-access-2dm72") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "kube-api-access-2dm72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 16:19:52.601975 kubelet[2281]: I0625 16:19:52.601941 2281 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/922df580-a65a-49ff-9335-9d705be62ed6-node-certs" (OuterVolumeSpecName: "node-certs") pod "922df580-a65a-49ff-9335-9d705be62ed6" (UID: "922df580-a65a-49ff-9335-9d705be62ed6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 16:19:52.602554 systemd[1]: var-lib-kubelet-pods-922df580\x2da65a\x2d49ff\x2d9335\x2d9d705be62ed6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dm72.mount: Deactivated successfully. Jun 25 16:19:52.602678 systemd[1]: var-lib-kubelet-pods-922df580\x2da65a\x2d49ff\x2d9335\x2d9d705be62ed6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 16:19:52.696137 kubelet[2281]: I0625 16:19:52.696075 2281 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/922df580-a65a-49ff-9335-9d705be62ed6-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696137 kubelet[2281]: I0625 16:19:52.696128 2281 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/922df580-a65a-49ff-9335-9d705be62ed6-node-certs\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696137 kubelet[2281]: I0625 16:19:52.696143 2281 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-policysync\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696137 kubelet[2281]: I0625 16:19:52.696156 2281 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696492 kubelet[2281]: I0625 16:19:52.696169 2281 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696492 kubelet[2281]: I0625 16:19:52.696184 2281 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696492 kubelet[2281]: I0625 16:19:52.696196 2281 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2dm72\" (UniqueName: \"kubernetes.io/projected/922df580-a65a-49ff-9335-9d705be62ed6-kube-api-access-2dm72\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:52.696492 kubelet[2281]: I0625 16:19:52.696209 2281 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/922df580-a65a-49ff-9335-9d705be62ed6-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jun 25 16:19:53.418364 kubelet[2281]: E0625 16:19:53.418309 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:19:53.476279 kubelet[2281]: I0625 16:19:53.476244 2281 scope.go:117] "RemoveContainer" 
containerID="499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623" Jun 25 16:19:53.520506 systemd[1]: Removed slice kubepods-besteffort-pod922df580_a65a_49ff_9335_9d705be62ed6.slice - libcontainer container kubepods-besteffort-pod922df580_a65a_49ff_9335_9d705be62ed6.slice. Jun 25 16:19:53.522915 containerd[1287]: time="2024-06-25T16:19:53.522879757Z" level=info msg="RemoveContainer for \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\"" Jun 25 16:19:53.766455 kubelet[2281]: I0625 16:19:53.765509 2281 topology_manager.go:215] "Topology Admit Handler" podUID="6d3453fa-cd53-4505-aae8-4aad792e9c3c" podNamespace="calico-system" podName="calico-node-t5cm9" Jun 25 16:19:53.772628 kubelet[2281]: E0625 16:19:53.772586 2281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="922df580-a65a-49ff-9335-9d705be62ed6" containerName="flexvol-driver" Jun 25 16:19:53.772727 kubelet[2281]: I0625 16:19:53.772670 2281 memory_manager.go:346] "RemoveStaleState removing state" podUID="922df580-a65a-49ff-9335-9d705be62ed6" containerName="flexvol-driver" Jun 25 16:19:53.778267 systemd[1]: Created slice kubepods-besteffort-pod6d3453fa_cd53_4505_aae8_4aad792e9c3c.slice - libcontainer container kubepods-besteffort-pod6d3453fa_cd53_4505_aae8_4aad792e9c3c.slice. Jun 25 16:19:53.920518 kubelet[2281]: I0625 16:19:53.920478 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-lib-modules\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920518 kubelet[2281]: I0625 16:19:53.920515 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d3453fa-cd53-4505-aae8-4aad792e9c3c-node-certs\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920713 kubelet[2281]: I0625 16:19:53.920532 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-cni-net-dir\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920713 kubelet[2281]: I0625 16:19:53.920553 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dhwh\" (UniqueName: \"kubernetes.io/projected/6d3453fa-cd53-4505-aae8-4aad792e9c3c-kube-api-access-9dhwh\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920713 kubelet[2281]: I0625 16:19:53.920682 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-xtables-lock\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920818 kubelet[2281]: I0625 16:19:53.920736 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3453fa-cd53-4505-aae8-4aad792e9c3c-tigera-ca-bundle\") pod \"calico-node-t5cm9\" (UID: 
\"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920818 kubelet[2281]: I0625 16:19:53.920782 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-cni-log-dir\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920863 kubelet[2281]: I0625 16:19:53.920845 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-flexvol-driver-host\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920940 kubelet[2281]: I0625 16:19:53.920917 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-var-lib-calico\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.920981 kubelet[2281]: I0625 16:19:53.920968 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-policysync\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.921006 kubelet[2281]: I0625 16:19:53.920989 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-var-run-calico\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.921033 kubelet[2281]: I0625 16:19:53.921012 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d3453fa-cd53-4505-aae8-4aad792e9c3c-cni-bin-dir\") pod \"calico-node-t5cm9\" (UID: \"6d3453fa-cd53-4505-aae8-4aad792e9c3c\") " pod="calico-system/calico-node-t5cm9" Jun 25 16:19:53.986495 containerd[1287]: time="2024-06-25T16:19:53.986433080Z" level=info msg="RemoveContainer for \"499d24b30b225b9d0bce5ddc59a4b92b57c6c45dc2b52368f5aa7c3dbe363623\" returns successfully" Jun 25 16:19:54.090794 containerd[1287]: time="2024-06-25T16:19:54.090722899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:54.113574 containerd[1287]: time="2024-06-25T16:19:54.113503562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 16:19:54.132325 containerd[1287]: time="2024-06-25T16:19:54.132262154Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:54.147375 containerd[1287]: time="2024-06-25T16:19:54.147301431Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:54.154413 containerd[1287]: 
time="2024-06-25T16:19:54.154349565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:19:54.155300 containerd[1287]: time="2024-06-25T16:19:54.155262228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.598477387s" Jun 25 16:19:54.155382 containerd[1287]: time="2024-06-25T16:19:54.155305840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 16:19:54.168187 containerd[1287]: time="2024-06-25T16:19:54.168113328Z" level=info msg="CreateContainer within sandbox \"e09a51149c20357e7a054fe644e27ed45b4640e29f39d8ca164506861c9824d2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 16:19:54.226918 containerd[1287]: time="2024-06-25T16:19:54.226850714Z" level=info msg="CreateContainer within sandbox \"e09a51149c20357e7a054fe644e27ed45b4640e29f39d8ca164506861c9824d2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0c6ee4ff8a6a3113c2bf66c243b107adfef7f07ad00594cf2be96153046cc03e\"" Jun 25 16:19:54.228092 containerd[1287]: time="2024-06-25T16:19:54.228048714Z" level=info msg="StartContainer for \"0c6ee4ff8a6a3113c2bf66c243b107adfef7f07ad00594cf2be96153046cc03e\"" Jun 25 16:19:54.258402 systemd[1]: Started cri-containerd-0c6ee4ff8a6a3113c2bf66c243b107adfef7f07ad00594cf2be96153046cc03e.scope - libcontainer container 0c6ee4ff8a6a3113c2bf66c243b107adfef7f07ad00594cf2be96153046cc03e. 
Jun 25 16:19:54.271000 audit: BPF prog-id=128 op=LOAD Jun 25 16:19:54.272000 audit: BPF prog-id=129 op=LOAD Jun 25 16:19:54.272000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2740 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063366565346666386136613331313363326266363663323433623130 Jun 25 16:19:54.272000 audit: BPF prog-id=130 op=LOAD Jun 25 16:19:54.272000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2740 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063366565346666386136613331313363326266363663323433623130 Jun 25 16:19:54.272000 audit: BPF prog-id=130 op=UNLOAD Jun 25 16:19:54.272000 audit: BPF prog-id=129 op=UNLOAD Jun 25 16:19:54.272000 audit: BPF prog-id=131 op=LOAD Jun 25 16:19:54.272000 audit[2909]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2740 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063366565346666386136613331313363326266363663323433623130 Jun 25 16:19:54.307566 containerd[1287]: time="2024-06-25T16:19:54.307505011Z" level=info msg="StartContainer for \"0c6ee4ff8a6a3113c2bf66c243b107adfef7f07ad00594cf2be96153046cc03e\" returns successfully" Jun 25 16:19:54.380391 kubelet[2281]: E0625 16:19:54.380258 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:54.381125 containerd[1287]: time="2024-06-25T16:19:54.380973997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5cm9,Uid:6d3453fa-cd53-4505-aae8-4aad792e9c3c,Namespace:calico-system,Attempt:0,}" Jun 25 16:19:54.460728 containerd[1287]: time="2024-06-25T16:19:54.460612798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:19:54.460728 containerd[1287]: time="2024-06-25T16:19:54.460683200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:54.460728 containerd[1287]: time="2024-06-25T16:19:54.460706774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:19:54.460998 containerd[1287]: time="2024-06-25T16:19:54.460722674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:19:54.477410 systemd[1]: Started cri-containerd-5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10.scope - libcontainer container 5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10. Jun 25 16:19:54.479882 kubelet[2281]: E0625 16:19:54.479839 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:54.491000 audit: BPF prog-id=132 op=LOAD Jun 25 16:19:54.491000 audit: BPF prog-id=133 op=LOAD Jun 25 16:19:54.491000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b3988 a2=78 a3=0 items=0 ppid=2947 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561366661633631663265346435653838623665343233616335353932 Jun 25 16:19:54.491000 audit: BPF prog-id=134 op=LOAD Jun 25 16:19:54.491000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001b3720 a2=78 a3=0 items=0 ppid=2947 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561366661633631663265346435653838623665343233616335353932 Jun 25 16:19:54.491000 audit: BPF prog-id=134 op=UNLOAD Jun 25 16:19:54.491000 audit: BPF prog-id=133 op=UNLOAD Jun 25 16:19:54.492000 audit: BPF prog-id=135 op=LOAD Jun 25 16:19:54.492000 audit[2956]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001b3be0 a2=78 a3=0 items=0 ppid=2947 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561366661633631663265346435653838623665343233616335353932 Jun 25 16:19:54.503283 containerd[1287]: time="2024-06-25T16:19:54.503238703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t5cm9,Uid:6d3453fa-cd53-4505-aae8-4aad792e9c3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\"" Jun 25 16:19:54.503931 kubelet[2281]: E0625 16:19:54.503905 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:54.505620 containerd[1287]: 
time="2024-06-25T16:19:54.505595158Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 16:19:54.727923 containerd[1287]: time="2024-06-25T16:19:54.727766278Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296\"" Jun 25 16:19:54.729157 containerd[1287]: time="2024-06-25T16:19:54.728598260Z" level=info msg="StartContainer for \"7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296\"" Jun 25 16:19:54.757665 systemd[1]: Started cri-containerd-7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296.scope - libcontainer container 7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296. Jun 25 16:19:54.771000 audit: BPF prog-id=136 op=LOAD Jun 25 16:19:54.771000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2947 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633564343732306639366430613134656331623234363834646163 Jun 25 16:19:54.771000 audit: BPF prog-id=137 op=LOAD Jun 25 16:19:54.771000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2947 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633564343732306639366430613134656331623234363834646163 Jun 25 16:19:54.771000 audit: BPF prog-id=137 op=UNLOAD Jun 25 16:19:54.771000 audit: BPF prog-id=136 op=UNLOAD Jun 25 16:19:54.771000 audit: BPF prog-id=138 op=LOAD Jun 25 16:19:54.771000 audit[2987]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2947 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:54.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763633564343732306639366430613134656331623234363834646163 Jun 25 16:19:54.791146 containerd[1287]: time="2024-06-25T16:19:54.791101907Z" level=info msg="StartContainer for \"7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296\" returns successfully" Jun 25 16:19:54.801568 systemd[1]: cri-containerd-7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296.scope: Deactivated successfully. 
Jun 25 16:19:54.807000 audit: BPF prog-id=138 op=UNLOAD Jun 25 16:19:55.679633 kubelet[2281]: E0625 16:19:55.679572 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:19:55.683851 kubelet[2281]: I0625 16:19:55.683820 2281 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="922df580-a65a-49ff-9335-9d705be62ed6" path="/var/lib/kubelet/pods/922df580-a65a-49ff-9335-9d705be62ed6/volumes" Jun 25 16:19:55.685052 kubelet[2281]: I0625 16:19:55.685036 2281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:19:55.685469 kubelet[2281]: E0625 16:19:55.685440 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:55.685682 kubelet[2281]: E0625 16:19:55.685539 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:55.897855 kubelet[2281]: I0625 16:19:55.897809 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-589d585b6d-s2bjj" podStartSLOduration=2.94349756 podCreationTimestamp="2024-06-25 16:19:49 +0000 UTC" firstStartedPulling="2024-06-25 16:19:50.201901117 +0000 UTC m=+26.862944969" lastFinishedPulling="2024-06-25 16:19:54.155558004 +0000 UTC m=+30.816601856" observedRunningTime="2024-06-25 16:19:54.515390409 +0000 UTC m=+31.176434261" watchObservedRunningTime="2024-06-25 16:19:55.897154447 +0000 UTC m=+32.558198299" Jun 25 16:19:55.928989 containerd[1287]: time="2024-06-25T16:19:55.928909144Z" level=info msg="shim disconnected" id=7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296 namespace=k8s.io Jun 25 16:19:55.928989 containerd[1287]: time="2024-06-25T16:19:55.928973225Z" level=warning msg="cleaning up after shim disconnected" id=7cc5d4720f96d0a14ec1b24684dacb134fa5b63b8543f921e9a71a5ff6db9296 namespace=k8s.io Jun 25 16:19:55.928989 containerd[1287]: time="2024-06-25T16:19:55.928983694Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:19:56.688566 kubelet[2281]: E0625 16:19:56.688538 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:19:56.692462 containerd[1287]: time="2024-06-25T16:19:56.692430118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 16:19:57.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:58380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:19:57.156734 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:58380.service - OpenSSH per-connection server daemon (10.0.0.1:58380). Jun 25 16:19:57.159004 kernel: kauditd_printk_skb: 40 callbacks suppressed Jun 25 16:19:57.159059 kernel: audit: type=1130 audit(1719332397.155:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:58380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:19:57.190000 audit[3043]: USER_ACCT pid=3043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.192358 sshd[3043]: Accepted publickey for core from 10.0.0.1 port 58380 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:19:57.193561 sshd[3043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:19:57.191000 audit[3043]: CRED_ACQ pid=3043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.202013 kernel: audit: type=1101 audit(1719332397.190:503): pid=3043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.202129 kernel: audit: type=1103 audit(1719332397.191:504): pid=3043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.202165 kernel: audit: type=1006 audit(1719332397.191:505): pid=3043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 16:19:57.191000 audit[3043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6055920 a2=3 a3=7f37c3ea4480 items=0 ppid=1 pid=3043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:57.206345 kernel: audit: type=1300 audit(1719332397.191:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6055920 a2=3 a3=7f37c3ea4480 items=0 ppid=1 pid=3043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:19:57.191000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:19:57.208260 kernel: audit: type=1327 audit(1719332397.191:505): proctitle=737368643A20636F7265205B707269765D Jun 25 16:19:57.208814 systemd-logind[1271]: New session 8 of user core. Jun 25 16:19:57.214401 systemd[1]: Started session-8.scope - Session 8 of User core. 
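The podStartSLOduration=2.94349756 in the calico-typha startup-latency record above is consistent with taking the time from pod creation to the watch-observed running time and subtracting the image-pull window; the following is a reconstruction from the field values quoted in that record, not a claim about kubelet's exact formula:

from datetime import datetime, timezone

# Field values copied from the "Observed pod startup duration" record above,
# with fractional seconds truncated to microseconds for fromisoformat.
created = datetime(2024, 6, 25, 16, 19, 49, tzinfo=timezone.utc)            # podCreationTimestamp
first_pull = datetime.fromisoformat("2024-06-25T16:19:50.201901+00:00")     # firstStartedPulling
last_pull = datetime.fromisoformat("2024-06-25T16:19:54.155558+00:00")      # lastFinishedPulling
watch_running = datetime.fromisoformat("2024-06-25T16:19:55.897154+00:00")  # watchObservedRunningTime

pull_window = (last_pull - first_pull).total_seconds()
slo = (watch_running - created).total_seconds() - pull_window
print(round(slo, 6))  # ~2.943497, matching the logged podStartSLOduration=2.94349756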
Jun 25 16:19:57.217000 audit[3043]: USER_START pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.219000 audit[3045]: CRED_ACQ pid=3045 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.226564 kernel: audit: type=1105 audit(1719332397.217:506): pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.226646 kernel: audit: type=1103 audit(1719332397.219:507): pid=3045 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.327254 sshd[3043]: pam_unix(sshd:session): session closed for user core Jun 25 16:19:57.326000 audit[3043]: USER_END pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.329826 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:58380.service: Deactivated successfully. Jun 25 16:19:57.330694 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 16:19:57.331308 systemd-logind[1271]: Session 8 logged out. Waiting for processes to exit. Jun 25 16:19:57.332026 systemd-logind[1271]: Removed session 8. Jun 25 16:19:57.327000 audit[3043]: CRED_DISP pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.335753 kernel: audit: type=1106 audit(1719332397.326:508): pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.335891 kernel: audit: type=1104 audit(1719332397.327:509): pid=3043 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:19:57.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.61:22-10.0.0.1:58380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:19:57.420946 kubelet[2281]: E0625 16:19:57.420838 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:19:59.417679 kubelet[2281]: E0625 16:19:59.417626 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:20:01.418119 kubelet[2281]: E0625 16:20:01.417623 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:20:02.171891 containerd[1287]: time="2024-06-25T16:20:02.171816989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:02.172819 containerd[1287]: time="2024-06-25T16:20:02.172774956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 16:20:02.174322 containerd[1287]: time="2024-06-25T16:20:02.174273538Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:02.176492 containerd[1287]: time="2024-06-25T16:20:02.176460692Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:02.179269 containerd[1287]: time="2024-06-25T16:20:02.179206214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:02.180169 containerd[1287]: time="2024-06-25T16:20:02.180122353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.487651398s" Jun 25 16:20:02.180169 containerd[1287]: time="2024-06-25T16:20:02.180162529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 16:20:02.193970 containerd[1287]: time="2024-06-25T16:20:02.193912038Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 16:20:02.213726 containerd[1287]: time="2024-06-25T16:20:02.213671104Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9\"" Jun 25 16:20:02.214380 containerd[1287]: time="2024-06-25T16:20:02.214328958Z" level=info msg="StartContainer for \"1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9\"" Jun 25 16:20:02.244787 systemd[1]: Started cri-containerd-1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9.scope - libcontainer container 1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9. Jun 25 16:20:02.259000 audit: BPF prog-id=139 op=LOAD Jun 25 16:20:02.262133 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:20:02.262194 kernel: audit: type=1334 audit(1719332402.259:511): prog-id=139 op=LOAD Jun 25 16:20:02.262226 kernel: audit: type=1300 audit(1719332402.259:511): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.259000 audit[3072]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353834303864383231393434303564353434613162383437616436 Jun 25 16:20:02.269290 kernel: audit: type=1327 audit(1719332402.259:511): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353834303864383231393434303564353434613162383437616436 Jun 25 16:20:02.269375 kernel: audit: type=1334 audit(1719332402.259:512): prog-id=140 op=LOAD Jun 25 16:20:02.259000 audit: BPF prog-id=140 op=LOAD Jun 25 16:20:02.259000 audit[3072]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.273892 kernel: audit: type=1300 audit(1719332402.259:512): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.273949 kernel: audit: type=1327 audit(1719332402.259:512): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353834303864383231393434303564353434613162383437616436 Jun 25 16:20:02.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353834303864383231393434303564353434613162383437616436 Jun 25 16:20:02.259000 audit: BPF prog-id=140 op=UNLOAD 
Jun 25 16:20:02.278807 kernel: audit: type=1334 audit(1719332402.259:513): prog-id=140 op=UNLOAD Jun 25 16:20:02.278890 kernel: audit: type=1334 audit(1719332402.259:514): prog-id=139 op=UNLOAD Jun 25 16:20:02.259000 audit: BPF prog-id=139 op=UNLOAD Jun 25 16:20:02.259000 audit: BPF prog-id=141 op=LOAD Jun 25 16:20:02.280684 kernel: audit: type=1334 audit(1719332402.259:515): prog-id=141 op=LOAD Jun 25 16:20:02.280732 kernel: audit: type=1300 audit(1719332402.259:515): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.259000 audit[3072]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=2947 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135353834303864383231393434303564353434613162383437616436 Jun 25 16:20:02.339356 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392). Jun 25 16:20:02.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:58392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:02.428760 containerd[1287]: time="2024-06-25T16:20:02.428548252Z" level=info msg="StartContainer for \"1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9\" returns successfully" Jun 25 16:20:02.447000 audit[3101]: USER_ACCT pid=3101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.449435 sshd[3101]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:02.448000 audit[3101]: CRED_ACQ pid=3101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.448000 audit[3101]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe2756b90 a2=3 a3=7fae647b6480 items=0 ppid=1 pid=3101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:02.448000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:02.450081 sshd[3101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:02.454405 systemd-logind[1271]: New session 9 of user core. Jun 25 16:20:02.464461 systemd[1]: Started session-9.scope - Session 9 of User core. 
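The kubelet volume mounts cleaned up earlier in this section (kube-api-access-2dm72, node-certs) appear as systemd mount units with \x2d and \x7e escapes. A simplified sketch of that path-to-unit-name mapping, covering only the characters that actually occur in those names (systemd-escape --path implements the full rules):

# '/' becomes the unit-name separator '-'; '-' and '~' inside a path
# component are hex-escaped. Deliberately partial: only the characters
# seen in the mount units logged above are handled here.
def escape_path(path: str) -> str:
    comps = [c for c in path.strip("/").split("/") if c]
    return "-".join(c.replace("-", r"\x2d").replace("~", r"\x7e") for c in comps)

pod_uid = "922df580-a65a-49ff-9335-9d705be62ed6"
volume_path = f"/var/lib/kubelet/pods/{pod_uid}/volumes/kubernetes.io~projected/kube-api-access-2dm72"
print(escape_path(volume_path) + ".mount")
# var-lib-kubelet-pods-922df580\x2da65a\x2d...\x2d2dm72.mount, as logged above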
Jun 25 16:20:02.467000 audit[3101]: USER_START pid=3101 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.469000 audit[3103]: CRED_ACQ pid=3103 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.957733 kubelet[2281]: E0625 16:20:02.957682 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:02.985976 sshd[3101]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:02.985000 audit[3101]: USER_END pid=3101 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.986000 audit[3101]: CRED_DISP pid=3101 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:02.989523 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:58392.service: Deactivated successfully. Jun 25 16:20:02.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.61:22-10.0.0.1:58392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:02.990342 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 16:20:02.991185 systemd-logind[1271]: Session 9 logged out. Waiting for processes to exit. Jun 25 16:20:02.992067 systemd-logind[1271]: Removed session 9. Jun 25 16:20:03.416822 kubelet[2281]: E0625 16:20:03.416780 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:20:04.315316 containerd[1287]: time="2024-06-25T16:20:04.315250047Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 16:20:04.317759 systemd[1]: cri-containerd-1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9.scope: Deactivated successfully. Jun 25 16:20:04.322000 audit: BPF prog-id=141 op=UNLOAD Jun 25 16:20:04.336646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9-rootfs.mount: Deactivated successfully. 
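[Editor's note] The recurring kubelet dns.go warning above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the resolv.conf-style cap on nameservers: when the node lists more resolvers than the limit, only the first ones are passed through and the rest are dropped. A small illustrative sketch of that truncation, assuming a cap of three (matching the three servers shown in the applied line); the fourth address below is hypothetical and not from this log:

```python
# Illustrative sketch of the truncation behind the dns.go warning above.
# Assumption: the cap is 3, matching the three servers in the applied line;
# the fourth resolver here is a hypothetical extra entry that triggers the warning.
MAX_NAMESERVERS = 3

def applied_nameservers(configured: list[str]) -> tuple[list[str], bool]:
    """Return the nameservers actually applied and whether any were dropped."""
    return configured[:MAX_NAMESERVERS], len(configured) > MAX_NAMESERVERS

applied, truncated = applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"])
print(applied)    # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
print(truncated)  # True -> kubelet logs "Nameserver limits were exceeded ..."
```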
Jun 25 16:20:04.364439 containerd[1287]: time="2024-06-25T16:20:04.364353495Z" level=info msg="shim disconnected" id=1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9 namespace=k8s.io Jun 25 16:20:04.364439 containerd[1287]: time="2024-06-25T16:20:04.364425219Z" level=warning msg="cleaning up after shim disconnected" id=1558408d82194405d544a1b847ad65e30156302fb5d7f66bd211d4609dd9f2e9 namespace=k8s.io Jun 25 16:20:04.364439 containerd[1287]: time="2024-06-25T16:20:04.364440658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 16:20:04.384874 kubelet[2281]: I0625 16:20:04.384840 2281 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 16:20:04.406604 kubelet[2281]: I0625 16:20:04.406544 2281 topology_manager.go:215] "Topology Admit Handler" podUID="e35b0eb9-faf1-4363-bfed-20a104ae884f" podNamespace="kube-system" podName="coredns-5dd5756b68-xhllt" Jun 25 16:20:04.406793 kubelet[2281]: I0625 16:20:04.406773 2281 topology_manager.go:215] "Topology Admit Handler" podUID="1c3a0a88-0373-4ee8-9c8d-f0282229e96f" podNamespace="kube-system" podName="coredns-5dd5756b68-hfnk2" Jun 25 16:20:04.407675 kubelet[2281]: I0625 16:20:04.407252 2281 topology_manager.go:215] "Topology Admit Handler" podUID="8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91" podNamespace="calico-system" podName="calico-kube-controllers-5688959d9c-2jz59" Jun 25 16:20:04.413911 systemd[1]: Created slice kubepods-burstable-pode35b0eb9_faf1_4363_bfed_20a104ae884f.slice - libcontainer container kubepods-burstable-pode35b0eb9_faf1_4363_bfed_20a104ae884f.slice. Jun 25 16:20:04.420070 systemd[1]: Created slice kubepods-besteffort-pod8c346c41_8c1c_4ee1_9f01_b8cf91cb6c91.slice - libcontainer container kubepods-besteffort-pod8c346c41_8c1c_4ee1_9f01_b8cf91cb6c91.slice. Jun 25 16:20:04.423601 systemd[1]: Created slice kubepods-burstable-pod1c3a0a88_0373_4ee8_9c8d_f0282229e96f.slice - libcontainer container kubepods-burstable-pod1c3a0a88_0373_4ee8_9c8d_f0282229e96f.slice. 
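[Editor's note] The "Created slice kubepods-..." lines above show the naming scheme used by the kubelet's systemd cgroup driver: the pod's QoS class plus its UID with dashes mapped to underscores. A short sketch reproducing the names seen in this log (both UIDs are taken from the entries above):

```python
# Sketch of the slice-name mapping visible above (systemd cgroup driver):
# "kubepods-<qos>-pod<uid with '-' -> '_'>.slice".
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "e35b0eb9-faf1-4363-bfed-20a104ae884f"))
# kubepods-burstable-pode35b0eb9_faf1_4363_bfed_20a104ae884f.slice
print(pod_slice_name("besteffort", "8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91"))
# kubepods-besteffort-pod8c346c41_8c1c_4ee1_9f01_b8cf91cb6c91.slice
```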
Jun 25 16:20:04.559990 kubelet[2281]: I0625 16:20:04.559932 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2ksl\" (UniqueName: \"kubernetes.io/projected/8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91-kube-api-access-g2ksl\") pod \"calico-kube-controllers-5688959d9c-2jz59\" (UID: \"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91\") " pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" Jun 25 16:20:04.559990 kubelet[2281]: I0625 16:20:04.560000 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl5lp\" (UniqueName: \"kubernetes.io/projected/1c3a0a88-0373-4ee8-9c8d-f0282229e96f-kube-api-access-pl5lp\") pod \"coredns-5dd5756b68-hfnk2\" (UID: \"1c3a0a88-0373-4ee8-9c8d-f0282229e96f\") " pod="kube-system/coredns-5dd5756b68-hfnk2" Jun 25 16:20:04.560281 kubelet[2281]: I0625 16:20:04.560134 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3a0a88-0373-4ee8-9c8d-f0282229e96f-config-volume\") pod \"coredns-5dd5756b68-hfnk2\" (UID: \"1c3a0a88-0373-4ee8-9c8d-f0282229e96f\") " pod="kube-system/coredns-5dd5756b68-hfnk2" Jun 25 16:20:04.560281 kubelet[2281]: I0625 16:20:04.560178 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e35b0eb9-faf1-4363-bfed-20a104ae884f-config-volume\") pod \"coredns-5dd5756b68-xhllt\" (UID: \"e35b0eb9-faf1-4363-bfed-20a104ae884f\") " pod="kube-system/coredns-5dd5756b68-xhllt" Jun 25 16:20:04.560281 kubelet[2281]: I0625 16:20:04.560198 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91-tigera-ca-bundle\") pod \"calico-kube-controllers-5688959d9c-2jz59\" (UID: \"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91\") " pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" Jun 25 16:20:04.560388 kubelet[2281]: I0625 16:20:04.560360 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxpg\" (UniqueName: \"kubernetes.io/projected/e35b0eb9-faf1-4363-bfed-20a104ae884f-kube-api-access-flxpg\") pod \"coredns-5dd5756b68-xhllt\" (UID: \"e35b0eb9-faf1-4363-bfed-20a104ae884f\") " pod="kube-system/coredns-5dd5756b68-xhllt" Jun 25 16:20:04.718185 kubelet[2281]: E0625 16:20:04.718144 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:04.718761 containerd[1287]: time="2024-06-25T16:20:04.718721049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xhllt,Uid:e35b0eb9-faf1-4363-bfed-20a104ae884f,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:04.722286 containerd[1287]: time="2024-06-25T16:20:04.722257003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5688959d9c-2jz59,Uid:8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:04.726533 kubelet[2281]: E0625 16:20:04.726506 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:04.726807 containerd[1287]: time="2024-06-25T16:20:04.726780440Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hfnk2,Uid:1c3a0a88-0373-4ee8-9c8d-f0282229e96f,Namespace:kube-system,Attempt:0,}" Jun 25 16:20:04.851713 containerd[1287]: time="2024-06-25T16:20:04.851608192Z" level=error msg="Failed to destroy network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.852059 containerd[1287]: time="2024-06-25T16:20:04.852013573Z" level=error msg="encountered an error cleaning up failed sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.852158 containerd[1287]: time="2024-06-25T16:20:04.852118810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xhllt,Uid:e35b0eb9-faf1-4363-bfed-20a104ae884f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.852509 kubelet[2281]: E0625 16:20:04.852473 2281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.852602 kubelet[2281]: E0625 16:20:04.852554 2281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xhllt" Jun 25 16:20:04.852602 kubelet[2281]: E0625 16:20:04.852578 2281 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-xhllt" Jun 25 16:20:04.852674 kubelet[2281]: E0625 16:20:04.852642 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-xhllt_kube-system(e35b0eb9-faf1-4363-bfed-20a104ae884f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-xhllt_kube-system(e35b0eb9-faf1-4363-bfed-20a104ae884f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xhllt" podUID="e35b0eb9-faf1-4363-bfed-20a104ae884f" Jun 25 16:20:04.862600 containerd[1287]: time="2024-06-25T16:20:04.862530905Z" level=error msg="Failed to destroy network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.862997 containerd[1287]: time="2024-06-25T16:20:04.862964649Z" level=error msg="encountered an error cleaning up failed sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.863060 containerd[1287]: time="2024-06-25T16:20:04.863024251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5688959d9c-2jz59,Uid:8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.863321 kubelet[2281]: E0625 16:20:04.863270 2281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.863402 kubelet[2281]: E0625 16:20:04.863337 2281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" Jun 25 16:20:04.863402 kubelet[2281]: E0625 16:20:04.863361 2281 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" Jun 25 16:20:04.863488 kubelet[2281]: E0625 16:20:04.863412 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5688959d9c-2jz59_calico-system(8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5688959d9c-2jz59_calico-system(8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" podUID="8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91" Jun 25 16:20:04.864226 containerd[1287]: time="2024-06-25T16:20:04.864145183Z" level=error msg="Failed to destroy network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.864588 containerd[1287]: time="2024-06-25T16:20:04.864554702Z" level=error msg="encountered an error cleaning up failed sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.864643 containerd[1287]: time="2024-06-25T16:20:04.864616919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hfnk2,Uid:1c3a0a88-0373-4ee8-9c8d-f0282229e96f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.864855 kubelet[2281]: E0625 16:20:04.864799 2281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:04.864855 kubelet[2281]: E0625 16:20:04.864841 2281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hfnk2" Jun 25 16:20:04.864961 kubelet[2281]: E0625 16:20:04.864867 2281 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-hfnk2" Jun 25 16:20:04.864961 kubelet[2281]: E0625 16:20:04.864922 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-hfnk2_kube-system(1c3a0a88-0373-4ee8-9c8d-f0282229e96f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-5dd5756b68-hfnk2_kube-system(1c3a0a88-0373-4ee8-9c8d-f0282229e96f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hfnk2" podUID="1c3a0a88-0373-4ee8-9c8d-f0282229e96f" Jun 25 16:20:04.962156 kubelet[2281]: I0625 16:20:04.962108 2281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:04.962754 containerd[1287]: time="2024-06-25T16:20:04.962713046Z" level=info msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" Jun 25 16:20:04.962977 containerd[1287]: time="2024-06-25T16:20:04.962947287Z" level=info msg="Ensure that sandbox 7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa in task-service has been cleanup successfully" Jun 25 16:20:04.966199 kubelet[2281]: I0625 16:20:04.965736 2281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:04.966578 containerd[1287]: time="2024-06-25T16:20:04.966535989Z" level=info msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" Jun 25 16:20:04.966801 containerd[1287]: time="2024-06-25T16:20:04.966771532Z" level=info msg="Ensure that sandbox 8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09 in task-service has been cleanup successfully" Jun 25 16:20:04.967876 kubelet[2281]: I0625 16:20:04.967843 2281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:04.968509 containerd[1287]: time="2024-06-25T16:20:04.968407631Z" level=info msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" Jun 25 16:20:04.969804 containerd[1287]: time="2024-06-25T16:20:04.969779434Z" level=info msg="Ensure that sandbox 9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce in task-service has been cleanup successfully" Jun 25 16:20:04.970865 kubelet[2281]: E0625 16:20:04.970835 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:04.973543 containerd[1287]: time="2024-06-25T16:20:04.973476782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 16:20:05.000189 containerd[1287]: time="2024-06-25T16:20:05.000093560Z" level=error msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" failed" error="failed to destroy network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.000470 kubelet[2281]: E0625 16:20:05.000424 2281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:05.000558 kubelet[2281]: E0625 16:20:05.000526 2281 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa"} Jun 25 16:20:05.000611 kubelet[2281]: E0625 16:20:05.000574 2281 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:05.000611 kubelet[2281]: E0625 16:20:05.000611 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" podUID="8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91" Jun 25 16:20:05.010486 containerd[1287]: time="2024-06-25T16:20:05.010422218Z" level=error msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" failed" error="failed to destroy network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.010832 kubelet[2281]: E0625 16:20:05.010806 2281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:05.010946 kubelet[2281]: E0625 16:20:05.010849 2281 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce"} Jun 25 16:20:05.010946 kubelet[2281]: E0625 16:20:05.010909 2281 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e35b0eb9-faf1-4363-bfed-20a104ae884f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:05.010946 kubelet[2281]: E0625 
16:20:05.010946 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e35b0eb9-faf1-4363-bfed-20a104ae884f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-xhllt" podUID="e35b0eb9-faf1-4363-bfed-20a104ae884f" Jun 25 16:20:05.014171 containerd[1287]: time="2024-06-25T16:20:05.014110657Z" level=error msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" failed" error="failed to destroy network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.014433 kubelet[2281]: E0625 16:20:05.014410 2281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:05.014488 kubelet[2281]: E0625 16:20:05.014437 2281 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09"} Jun 25 16:20:05.014488 kubelet[2281]: E0625 16:20:05.014478 2281 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c3a0a88-0373-4ee8-9c8d-f0282229e96f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:05.014552 kubelet[2281]: E0625 16:20:05.014509 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c3a0a88-0373-4ee8-9c8d-f0282229e96f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-hfnk2" podUID="1c3a0a88-0373-4ee8-9c8d-f0282229e96f" Jun 25 16:20:05.337376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce-shm.mount: Deactivated successfully. Jun 25 16:20:05.425954 systemd[1]: Created slice kubepods-besteffort-pod77a47d09_249f_41aa_9f0e_6a405db06ba3.slice - libcontainer container kubepods-besteffort-pod77a47d09_249f_41aa_9f0e_6a405db06ba3.slice. 
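[Editor's note] Every RunPodSandbox and StopPodSandbox failure above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes once it is running with the host path mounted. A minimal sketch of what that readiness condition amounts to (illustrative only, not the plugin's actual code):

```python
# Illustrative readiness check matching the error repeated above:
# pod networking cannot be set up (or torn down) until calico/node has written
# its node name to /var/lib/calico/nodename on the host.
NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready(path: str = NODENAME_FILE) -> bool:
    try:
        with open(path) as f:
            return f.read().strip() != ""
    except FileNotFoundError:
        # Corresponds to: "stat /var/lib/calico/nodename: no such file or directory"
        return False

if not calico_node_ready():
    print("check that the calico/node container is running and has mounted /var/lib/calico/")
```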
Jun 25 16:20:05.428076 containerd[1287]: time="2024-06-25T16:20:05.428034841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8xmb,Uid:77a47d09-249f-41aa-9f0e-6a405db06ba3,Namespace:calico-system,Attempt:0,}" Jun 25 16:20:05.483136 containerd[1287]: time="2024-06-25T16:20:05.483009360Z" level=error msg="Failed to destroy network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.483511 containerd[1287]: time="2024-06-25T16:20:05.483470926Z" level=error msg="encountered an error cleaning up failed sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.483574 containerd[1287]: time="2024-06-25T16:20:05.483537972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8xmb,Uid:77a47d09-249f-41aa-9f0e-6a405db06ba3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.483869 kubelet[2281]: E0625 16:20:05.483813 2281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.484177 kubelet[2281]: E0625 16:20:05.483876 2281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:20:05.484177 kubelet[2281]: E0625 16:20:05.483897 2281 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8xmb" Jun 25 16:20:05.484177 kubelet[2281]: E0625 16:20:05.483957 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8xmb_calico-system(77a47d09-249f-41aa-9f0e-6a405db06ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8xmb_calico-system(77a47d09-249f-41aa-9f0e-6a405db06ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:20:05.485303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2-shm.mount: Deactivated successfully. Jun 25 16:20:05.973917 kubelet[2281]: I0625 16:20:05.973875 2281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:05.974498 containerd[1287]: time="2024-06-25T16:20:05.974451406Z" level=info msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" Jun 25 16:20:05.974747 containerd[1287]: time="2024-06-25T16:20:05.974704140Z" level=info msg="Ensure that sandbox 1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2 in task-service has been cleanup successfully" Jun 25 16:20:05.999188 containerd[1287]: time="2024-06-25T16:20:05.999107764Z" level=error msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" failed" error="failed to destroy network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 16:20:05.999486 kubelet[2281]: E0625 16:20:05.999455 2281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:05.999552 kubelet[2281]: E0625 16:20:05.999512 2281 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2"} Jun 25 16:20:05.999584 kubelet[2281]: E0625 16:20:05.999557 2281 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77a47d09-249f-41aa-9f0e-6a405db06ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 16:20:05.999667 kubelet[2281]: E0625 16:20:05.999600 2281 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77a47d09-249f-41aa-9f0e-6a405db06ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-v8xmb" podUID="77a47d09-249f-41aa-9f0e-6a405db06ba3" Jun 25 16:20:07.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:55554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:08.000267 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:55554.service - OpenSSH per-connection server daemon (10.0.0.1:55554). Jun 25 16:20:08.030615 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:20:08.030761 kernel: audit: type=1130 audit(1719332407.999:526): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:55554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:08.059000 audit[3395]: USER_ACCT pid=3395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.060614 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 55554 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:08.062030 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:08.060000 audit[3395]: CRED_ACQ pid=3395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.066425 systemd-logind[1271]: New session 10 of user core. Jun 25 16:20:08.083535 kernel: audit: type=1101 audit(1719332408.059:527): pid=3395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.083565 kernel: audit: type=1103 audit(1719332408.060:528): pid=3395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.083579 kernel: audit: type=1006 audit(1719332408.060:529): pid=3395 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 16:20:08.083594 kernel: audit: type=1300 audit(1719332408.060:529): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d91b890 a2=3 a3=7f86a4315480 items=0 ppid=1 pid=3395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:08.083610 kernel: audit: type=1327 audit(1719332408.060:529): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:08.060000 audit[3395]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d91b890 a2=3 a3=7f86a4315480 items=0 ppid=1 pid=3395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:08.060000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:08.083570 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 16:20:08.087000 audit[3395]: USER_START pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.089000 audit[3397]: CRED_ACQ pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.097073 kernel: audit: type=1105 audit(1719332408.087:530): pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.097124 kernel: audit: type=1103 audit(1719332408.089:531): pid=3397 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.229991 sshd[3395]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:08.229000 audit[3395]: USER_END pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.233239 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:55554.service: Deactivated successfully. Jun 25 16:20:08.234112 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 16:20:08.234757 systemd-logind[1271]: Session 10 logged out. Waiting for processes to exit. Jun 25 16:20:08.235540 systemd-logind[1271]: Removed session 10. Jun 25 16:20:08.229000 audit[3395]: CRED_DISP pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.240638 kernel: audit: type=1106 audit(1719332408.229:532): pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.240704 kernel: audit: type=1104 audit(1719332408.229:533): pid=3395 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:08.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.61:22-10.0.0.1:55554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:12.437674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598821997.mount: Deactivated successfully. 
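[Editor's note] Sessions 9 and 10 above follow the same audit sequence for an SSH login and logout: USER_ACCT and CRED_ACQ on connect, USER_START when the PAM session opens, then USER_END and CRED_DISP on close, bracketed by systemd SERVICE_START/SERVICE_STOP for the per-connection unit. A simplified sketch that groups such records by kernel session id; the two sample lines are abbreviated copies of entries above:

```python
# Sketch: group audit records by session id (ses=N), skipping the "unset"
# id 4294967295 used before a login session exists. Field parsing is
# deliberately simplified to what appears in the entries above.
import re
from collections import defaultdict

def sessions(audit_lines: list[str]) -> dict[str, list[str]]:
    by_ses: dict[str, list[str]] = defaultdict(list)
    for line in audit_lines:
        m = re.search(r"audit\[\d+\]: (\w+) .*\bses=(\d+)", line)
        if m and m.group(2) != "4294967295":
            by_ses[m.group(2)].append(m.group(1))
    return dict(by_ses)

sample = [  # abbreviated from the log above
    "Jun 25 16:20:08.087000 audit[3395]: USER_START pid=3395 uid=0 auid=500 ses=10 ...",
    "Jun 25 16:20:08.229000 audit[3395]: USER_END pid=3395 uid=0 auid=500 ses=10 ...",
]
print(sessions(sample))  # {'10': ['USER_START', 'USER_END']}
```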
Jun 25 16:20:12.811770 containerd[1287]: time="2024-06-25T16:20:12.811620828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:12.814414 containerd[1287]: time="2024-06-25T16:20:12.814347782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 16:20:12.816458 containerd[1287]: time="2024-06-25T16:20:12.816368003Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:12.818748 containerd[1287]: time="2024-06-25T16:20:12.818701321Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:12.821257 containerd[1287]: time="2024-06-25T16:20:12.821184491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:12.821939 containerd[1287]: time="2024-06-25T16:20:12.821866264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.848343355s" Jun 25 16:20:12.821939 containerd[1287]: time="2024-06-25T16:20:12.821920600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 16:20:12.831899 containerd[1287]: time="2024-06-25T16:20:12.831850033Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 16:20:12.850989 containerd[1287]: time="2024-06-25T16:20:12.850933684Z" level=info msg="CreateContainer within sandbox \"5a6fac61f2e4d5e88b6e423ac55927ce2505f6e2429fed75f4c288da80a1ac10\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3\"" Jun 25 16:20:12.851557 containerd[1287]: time="2024-06-25T16:20:12.851518549Z" level=info msg="StartContainer for \"f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3\"" Jun 25 16:20:12.912383 systemd[1]: Started cri-containerd-f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3.scope - libcontainer container f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3. 
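[Editor's note] The "Pulled image" line above reports both the calico/node image size and the wall-clock pull time, which fixes the effective transfer rate. A quick back-of-the-envelope check using the two numbers from that line:

```python
# Back-of-the-envelope rate for the pull reported above:
# 115,238,612 bytes in 7.848343355 s.
size_bytes = 115_238_612      # "size" from the Pulled-image line
duration_s = 7.848343355      # "in 7.848343355s" from the same line

rate = size_bytes / duration_s
print(f"{rate / 1e6:.1f} MB/s  (~{rate / 2**20:.1f} MiB/s)")
# 14.7 MB/s  (~14.0 MiB/s)
```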
Jun 25 16:20:12.923000 audit: BPF prog-id=142 op=LOAD Jun 25 16:20:12.923000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2947 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:12.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630343434653065646138623266343135393734373864323730636231 Jun 25 16:20:12.923000 audit: BPF prog-id=143 op=LOAD Jun 25 16:20:12.923000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2947 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:12.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630343434653065646138623266343135393734373864323730636231 Jun 25 16:20:12.923000 audit: BPF prog-id=143 op=UNLOAD Jun 25 16:20:12.923000 audit: BPF prog-id=142 op=UNLOAD Jun 25 16:20:12.923000 audit: BPF prog-id=144 op=LOAD Jun 25 16:20:12.923000 audit[3424]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2947 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:12.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630343434653065646138623266343135393734373864323730636231 Jun 25 16:20:12.944197 containerd[1287]: time="2024-06-25T16:20:12.944113424Z" level=info msg="StartContainer for \"f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3\" returns successfully" Jun 25 16:20:12.990622 kubelet[2281]: E0625 16:20:12.990592 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:13.015406 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 16:20:13.015539 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 16:20:13.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:55570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:13.241890 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:55570.service - OpenSSH per-connection server daemon (10.0.0.1:55570). Jun 25 16:20:13.243145 kernel: kauditd_printk_skb: 12 callbacks suppressed Jun 25 16:20:13.243288 kernel: audit: type=1130 audit(1719332413.240:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:55570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:20:13.271000 audit[3486]: USER_ACCT pid=3486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.272561 sshd[3486]: Accepted publickey for core from 10.0.0.1 port 55570 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:13.283025 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:13.272000 audit[3486]: CRED_ACQ pid=3486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.287821 systemd-logind[1271]: New session 11 of user core. Jun 25 16:20:13.290965 kernel: audit: type=1101 audit(1719332413.271:541): pid=3486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.291074 kernel: audit: type=1103 audit(1719332413.272:542): pid=3486 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.291100 kernel: audit: type=1006 audit(1719332413.272:543): pid=3486 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 16:20:13.293393 kernel: audit: type=1300 audit(1719332413.272:543): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc80f36400 a2=3 a3=7ffa9676d480 items=0 ppid=1 pid=3486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:13.272000 audit[3486]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc80f36400 a2=3 a3=7ffa9676d480 items=0 ppid=1 pid=3486 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:13.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:13.299233 kernel: audit: type=1327 audit(1719332413.272:543): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:13.301419 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jun 25 16:20:13.304000 audit[3486]: USER_START pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.306000 audit[3488]: CRED_ACQ pid=3488 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.321069 kernel: audit: type=1105 audit(1719332413.304:544): pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.321120 kernel: audit: type=1103 audit(1719332413.306:545): pid=3488 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.475902 sshd[3486]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:13.475000 audit[3486]: USER_END pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.478418 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:55570.service: Deactivated successfully. Jun 25 16:20:13.479195 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 16:20:13.479997 systemd-logind[1271]: Session 11 logged out. Waiting for processes to exit. Jun 25 16:20:13.480813 systemd-logind[1271]: Removed session 11. Jun 25 16:20:13.475000 audit[3486]: CRED_DISP pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.484588 kernel: audit: type=1106 audit(1719332413.475:546): pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.484665 kernel: audit: type=1104 audit(1719332413.475:547): pid=3486 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:13.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.61:22-10.0.0.1:55570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:13.992649 kubelet[2281]: E0625 16:20:13.992602 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:14.010731 systemd[1]: run-containerd-runc-k8s.io-f0444e0eda8b2f41597478d270cb1584c3b514040d4ffec728f6788e44eb0ca3-runc.erBCi8.mount: Deactivated successfully. Jun 25 16:20:14.626000 audit[3601]: AVC avc: denied { write } for pid=3601 comm="tee" name="fd" dev="proc" ino=24430 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.626000 audit[3601]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe240f6a2f a2=241 a3=1b6 items=1 ppid=3555 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.626000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 16:20:14.626000 audit: PATH item=0 name="/dev/fd/63" inode=26034 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.630000 audit[3605]: AVC avc: denied { write } for pid=3605 comm="tee" name="fd" dev="proc" ino=24434 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.630000 audit[3605]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc9eccaa2e a2=241 a3=1b6 items=1 ppid=3556 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.630000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 16:20:14.630000 audit: PATH item=0 name="/dev/fd/63" inode=25026 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.632000 audit[3617]: AVC avc: denied { write } for pid=3617 comm="tee" name="fd" dev="proc" ino=24438 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.632000 audit[3617]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca7cb8a1e a2=241 a3=1b6 items=1 ppid=3550 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.632000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 16:20:14.632000 audit: PATH item=0 name="/dev/fd/63" inode=25035 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.632000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.634000 audit[3611]: AVC avc: denied { write } for pid=3611 comm="tee" name="fd" dev="proc" ino=26040 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.634000 audit[3611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecd53ca2e a2=241 a3=1b6 items=1 ppid=3551 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.634000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 16:20:14.634000 audit: PATH item=0 name="/dev/fd/63" inode=25029 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.644000 audit[3613]: AVC avc: denied { write } for pid=3613 comm="tee" name="fd" dev="proc" ino=26978 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.646000 audit[3607]: AVC avc: denied { write } for pid=3607 comm="tee" name="fd" dev="proc" ino=26981 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.654000 audit[3626]: AVC avc: denied { write } for pid=3626 comm="tee" name="fd" dev="proc" ino=25040 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 16:20:14.654000 audit[3626]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff826fba2e a2=241 a3=1b6 items=1 ppid=3562 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.654000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 16:20:14.654000 audit: PATH item=0 name="/dev/fd/63" inode=26975 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.644000 audit[3613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcceb82a30 a2=241 a3=1b6 items=1 ppid=3549 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.644000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 16:20:14.644000 audit: PATH item=0 name="/dev/fd/63" inode=25032 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:14.646000 audit[3607]: SYSCALL arch=c000003e 
syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0e70ea1f a2=241 a3=1b6 items=1 ppid=3560 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:14.646000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 16:20:14.646000 audit: PATH item=0 name="/dev/fd/63" inode=26037 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 16:20:14.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 16:20:15.987445 kubelet[2281]: I0625 16:20:15.987386 2281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 16:20:15.988123 kubelet[2281]: E0625 16:20:15.988103 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:15.995582 kubelet[2281]: E0625 16:20:15.995562 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:16.094792 kubelet[2281]: I0625 16:20:16.094730 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-t5cm9" podStartSLOduration=6.961568605 podCreationTimestamp="2024-06-25 16:19:53 +0000 UTC" firstStartedPulling="2024-06-25 16:19:56.689167122 +0000 UTC m=+33.350210974" lastFinishedPulling="2024-06-25 16:20:12.822268545 +0000 UTC m=+49.483312397" observedRunningTime="2024-06-25 16:20:13.023093412 +0000 UTC m=+49.684137274" watchObservedRunningTime="2024-06-25 16:20:16.094670028 +0000 UTC m=+52.755713880" Jun 25 16:20:16.383000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:16.383000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c00166f180 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:16.383000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:16.383000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:16.383000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=c a1=c000ef1c50 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:16.383000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:16.393000 audit[3665]: NETFILTER_CFG table=filter:97 family=2 entries=15 op=nft_register_rule pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:16.393000 audit[3665]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff1e587b10 a2=0 a3=7fff1e587afc items=0 ppid=2426 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:16.394000 audit[3665]: NETFILTER_CFG table=nat:98 family=2 entries=19 op=nft_register_chain pid=3665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:16.394000 audit[3665]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff1e587b10 a2=0 a3=7fff1e587afc items=0 ppid=2426 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:16.417776 containerd[1287]: time="2024-06-25T16:20:16.417708335Z" level=info msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.464 [INFO][3682] k8s.go 608: Cleaning up netns ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.466 [INFO][3682] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" iface="eth0" netns="/var/run/netns/cni-abec5609-7651-6f3b-add3-7aa5bcff03f6" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.466 [INFO][3682] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" iface="eth0" netns="/var/run/netns/cni-abec5609-7651-6f3b-add3-7aa5bcff03f6" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.466 [INFO][3682] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" iface="eth0" netns="/var/run/netns/cni-abec5609-7651-6f3b-add3-7aa5bcff03f6" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.466 [INFO][3682] k8s.go 615: Releasing IP address(es) ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.466 [INFO][3682] utils.go 188: Calico CNI releasing IP address ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.512 [INFO][3689] ipam_plugin.go 411: Releasing address using handleID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.513 [INFO][3689] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.513 [INFO][3689] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.523 [WARNING][3689] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.523 [INFO][3689] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.525 [INFO][3689] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:16.528398 containerd[1287]: 2024-06-25 16:20:16.526 [INFO][3682] k8s.go 621: Teardown processing complete. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:16.529123 containerd[1287]: time="2024-06-25T16:20:16.528711812Z" level=info msg="TearDown network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" successfully" Jun 25 16:20:16.529123 containerd[1287]: time="2024-06-25T16:20:16.528752310Z" level=info msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" returns successfully" Jun 25 16:20:16.529833 containerd[1287]: time="2024-06-25T16:20:16.529781480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5688959d9c-2jz59,Uid:8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91,Namespace:calico-system,Attempt:1,}" Jun 25 16:20:16.531433 systemd[1]: run-netns-cni\x2dabec5609\x2d7651\x2d6f3b\x2dadd3\x2d7aa5bcff03f6.mount: Deactivated successfully. 
Jun 25 16:20:16.959845 systemd-networkd[1115]: vxlan.calico: Link UP Jun 25 16:20:16.959855 systemd-networkd[1115]: vxlan.calico: Gained carrier Jun 25 16:20:16.973000 audit: BPF prog-id=145 op=LOAD Jun 25 16:20:16.973000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc040bc080 a2=70 a3=7f3f12501000 items=0 ppid=3705 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:20:16.973000 audit: BPF prog-id=145 op=UNLOAD Jun 25 16:20:16.973000 audit: BPF prog-id=146 op=LOAD Jun 25 16:20:16.973000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc040bc080 a2=70 a3=6f items=0 ppid=3705 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:20:16.973000 audit: BPF prog-id=146 op=UNLOAD Jun 25 16:20:16.973000 audit: BPF prog-id=147 op=LOAD Jun 25 16:20:16.973000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc040bc010 a2=70 a3=7ffc040bc080 items=0 ppid=3705 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:20:16.973000 audit: BPF prog-id=147 op=UNLOAD Jun 25 16:20:16.974000 audit: BPF prog-id=148 op=LOAD Jun 25 16:20:16.974000 audit[3784]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc040bc040 a2=70 a3=0 items=0 ppid=3705 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:16.974000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 16:20:16.988000 audit: BPF prog-id=148 op=UNLOAD Jun 25 16:20:17.044000 audit[3815]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=3815 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:17.044000 audit[3815]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcc646aab0 a2=0 a3=7ffcc646aa9c items=0 ppid=3705 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.044000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:17.117000 audit[3819]: NETFILTER_CFG table=raw:100 family=2 entries=19 op=nft_register_chain pid=3819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:17.117000 audit[3819]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffd29ad8960 a2=0 a3=7ffd29ad894c items=0 ppid=3705 pid=3819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.117000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:17.119000 audit[3817]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=3817 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:17.119000 audit[3817]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffffc9a9ce0 a2=0 a3=7ffffc9a9ccc items=0 ppid=3705 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.119000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:17.120000 audit[3818]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=3818 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:17.120000 audit[3818]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffc20e28dd0 a2=0 a3=7ffc20e28dbc items=0 ppid=3705 pid=3818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.120000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:17.357512 systemd-networkd[1115]: caliea4f0d68d82: Link UP Jun 25 16:20:17.359017 systemd-networkd[1115]: caliea4f0d68d82: Gained carrier Jun 25 16:20:17.359248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliea4f0d68d82: link becomes ready Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.266 [INFO][3827] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0 calico-kube-controllers-5688959d9c- calico-system 8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91 840 0 2024-06-25 16:19:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5688959d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5688959d9c-2jz59 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea4f0d68d82 [] []}} ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.266 [INFO][3827] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.307 [INFO][3841] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" HandleID="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.317 [INFO][3841] ipam_plugin.go 264: Auto assigning IP ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" HandleID="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5688959d9c-2jz59", "timestamp":"2024-06-25 16:20:17.307906735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.317 [INFO][3841] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.317 [INFO][3841] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.318 [INFO][3841] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.320 [INFO][3841] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.332 [INFO][3841] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.337 [INFO][3841] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.339 [INFO][3841] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.341 [INFO][3841] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.341 [INFO][3841] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.343 [INFO][3841] ipam.go 1685: Creating new handle: k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202 Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.347 [INFO][3841] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.353 [INFO][3841] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.353 [INFO][3841] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" host="localhost" Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.353 [INFO][3841] ipam_plugin.go 373: Released host-wide IPAM lock. 
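The containerd lines above trace one Calico IPAM transaction end to end: acquire the host-wide IPAM lock, look up the host's block affinity, load the affine block 192.168.88.128/26, claim the next free address (192.168.88.129), write the block back, and release the lock. The sketch below illustrates only the "claim the next free address from a block" step with an in-memory block, under that assumption; it is not Calico's ipam.go, which persists blocks and handles in the datastore and retries on conflicts:

    import ipaddress

    # Simplified stand-in for the "Attempting to assign 1 addresses from block"
    # step logged above; it just hands out the lowest unallocated host address.
    class Block:
        def __init__(self, cidr):
            self.cidr = ipaddress.ip_network(cidr)
            self.allocations = {}  # ip string -> handle ID

        def auto_assign(self, handle):
            for ip in self.cidr.hosts():
                key = str(ip)
                if key not in self.allocations:
                    self.allocations[key] = handle
                    return key
            raise RuntimeError(f"block {self.cidr} is full")

    block = Block("192.168.88.128/26")
    # First claim matches the calico-kube-controllers sandbox above (.129);
    # the next claim from the same block yields .130, as the csi-node-driver
    # sandbox receives later in this log.
    print(block.auto_assign("k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202"))
    print(block.auto_assign("k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f"))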
Jun 25 16:20:17.409693 containerd[1287]: 2024-06-25 16:20:17.353 [INFO][3841] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" HandleID="k8s-pod-network.fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.355 [INFO][3827] k8s.go 386: Populated endpoint ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0", GenerateName:"calico-kube-controllers-5688959d9c-", Namespace:"calico-system", SelfLink:"", UID:"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5688959d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5688959d9c-2jz59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4f0d68d82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.355 [INFO][3827] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.355 [INFO][3827] dataplane_linux.go 68: Setting the host side veth name to caliea4f0d68d82 ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.359 [INFO][3827] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.359 [INFO][3827] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0", GenerateName:"calico-kube-controllers-5688959d9c-", Namespace:"calico-system", SelfLink:"", UID:"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5688959d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202", Pod:"calico-kube-controllers-5688959d9c-2jz59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4f0d68d82", MAC:"a6:83:de:3d:4b:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:17.410478 containerd[1287]: 2024-06-25 16:20:17.407 [INFO][3827] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202" Namespace="calico-system" Pod="calico-kube-controllers-5688959d9c-2jz59" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:17.420000 audit[3861]: NETFILTER_CFG table=filter:103 family=2 entries=34 op=nft_register_chain pid=3861 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:17.420000 audit[3861]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffefe1b1c00 a2=0 a3=7ffefe1b1bec items=0 ppid=3705 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.420000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:17.541161 containerd[1287]: time="2024-06-25T16:20:17.541012238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:17.541161 containerd[1287]: time="2024-06-25T16:20:17.541087453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.541161 containerd[1287]: time="2024-06-25T16:20:17.541109775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:17.541161 containerd[1287]: time="2024-06-25T16:20:17.541125396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:17.562484 systemd[1]: Started cri-containerd-fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202.scope - libcontainer container fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202. Jun 25 16:20:17.572000 audit: BPF prog-id=149 op=LOAD Jun 25 16:20:17.573000 audit: BPF prog-id=150 op=LOAD Jun 25 16:20:17.573000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3869 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663636361663634343439653330306335383634663234663161653637 Jun 25 16:20:17.573000 audit: BPF prog-id=151 op=LOAD Jun 25 16:20:17.573000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3869 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663636361663634343439653330306335383634663234663161653637 Jun 25 16:20:17.573000 audit: BPF prog-id=151 op=UNLOAD Jun 25 16:20:17.573000 audit: BPF prog-id=150 op=UNLOAD Jun 25 16:20:17.573000 audit: BPF prog-id=152 op=LOAD Jun 25 16:20:17.573000 audit[3880]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3869 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:17.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663636361663634343439653330306335383634663234663161653637 Jun 25 16:20:17.573993 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:20:17.612998 containerd[1287]: time="2024-06-25T16:20:17.612869941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5688959d9c-2jz59,Uid:8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91,Namespace:calico-system,Attempt:1,} returns sandbox id \"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202\"" Jun 25 16:20:17.615650 containerd[1287]: time="2024-06-25T16:20:17.615595729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 16:20:17.782000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 
scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.782000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.782000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c01321eb40 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.782000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c0129d79e0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.782000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:17.782000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:17.785000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=6271 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.785000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c0132aa150 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.785000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:17.793000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=6275 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.793000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=69 a1=c0130fe7b0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.793000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:17.795000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=6277 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.795000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c0130fe8a0 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.795000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:17.812000 audit[2169]: AVC avc: denied { watch } for pid=2169 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c79,c171 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:17.812000 audit[2169]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c013091b60 a2=fc6 a3=0 items=0 ppid=1979 pid=2169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c79,c171 key=(null) Jun 25 16:20:17.812000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3631002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jun 25 16:20:18.343388 systemd-networkd[1115]: vxlan.calico: Gained IPv6LL Jun 25 16:20:18.417731 containerd[1287]: time="2024-06-25T16:20:18.417674465Z" level=info msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.453 [INFO][3922] k8s.go 608: Cleaning up netns ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.453 [INFO][3922] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" iface="eth0" netns="/var/run/netns/cni-3b9e2596-913a-d48a-755d-ad21b3094aee" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.454 [INFO][3922] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" iface="eth0" netns="/var/run/netns/cni-3b9e2596-913a-d48a-755d-ad21b3094aee" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.454 [INFO][3922] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" iface="eth0" netns="/var/run/netns/cni-3b9e2596-913a-d48a-755d-ad21b3094aee" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.454 [INFO][3922] k8s.go 615: Releasing IP address(es) ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.454 [INFO][3922] utils.go 188: Calico CNI releasing IP address ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.474 [INFO][3929] ipam_plugin.go 411: Releasing address using handleID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.474 [INFO][3929] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.475 [INFO][3929] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.480 [WARNING][3929] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.481 [INFO][3929] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.482 [INFO][3929] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:18.484761 containerd[1287]: 2024-06-25 16:20:18.483 [INFO][3922] k8s.go 621: Teardown processing complete. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:18.485357 containerd[1287]: time="2024-06-25T16:20:18.485316859Z" level=info msg="TearDown network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" successfully" Jun 25 16:20:18.485432 containerd[1287]: time="2024-06-25T16:20:18.485417292Z" level=info msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" returns successfully" Jun 25 16:20:18.486166 containerd[1287]: time="2024-06-25T16:20:18.486139236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8xmb,Uid:77a47d09-249f-41aa-9f0e-6a405db06ba3,Namespace:calico-system,Attempt:1,}" Jun 25 16:20:18.487907 systemd[1]: run-netns-cni\x2d3b9e2596\x2d913a\x2dd48a\x2d755d\x2dad21b3094aee.mount: Deactivated successfully. Jun 25 16:20:18.495542 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:39394.service - OpenSSH per-connection server daemon (10.0.0.1:39394). Jun 25 16:20:18.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:18.496610 kernel: kauditd_printk_skb: 109 callbacks suppressed Jun 25 16:20:18.496663 kernel: audit: type=1130 audit(1719332418.495:585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:18.526000 audit[3937]: USER_ACCT pid=3937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.526795 sshd[3937]: Accepted publickey for core from 10.0.0.1 port 39394 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:18.533590 sshd[3937]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:18.527000 audit[3937]: CRED_ACQ pid=3937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.538008 systemd-logind[1271]: New session 12 of user core. Jun 25 16:20:18.539179 kernel: audit: type=1101 audit(1719332418.526:586): pid=3937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.539246 kernel: audit: type=1103 audit(1719332418.527:587): pid=3937 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.527000 audit[3937]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0c61d3a0 a2=3 a3=7f059934f480 items=0 ppid=1 pid=3937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.544165 kernel: audit: type=1006 audit(1719332418.527:588): pid=3937 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 16:20:18.544236 kernel: audit: type=1300 audit(1719332418.527:588): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0c61d3a0 a2=3 a3=7f059934f480 items=0 ppid=1 pid=3937 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.544267 kernel: audit: type=1327 audit(1719332418.527:588): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:18.527000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:18.547392 systemd[1]: Started session-12.scope - Session 12 of User core. 
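Most audit records in this log carry the executing command line as a hex-encoded PROCTITLE field (argv entries are NUL-separated, so audit hex-encodes the whole string rather than print it raw). The record just above decodes to "sshd: core [priv]", and the earlier tee, bpftool and iptables-restore entries decode the same way. A small decoding sketch, assuming only that standard hex encoding of the field:

    import re

    def decode_proctitle(record):
        """Decode the hex-encoded proctitle= field of an audit record."""
        m = re.search(r"proctitle=([0-9A-Fa-f]+)$", record.strip())
        if not m:
            raise ValueError("no hex proctitle field found")
        raw = bytes.fromhex(m.group(1))
        # argv entries are separated by NUL bytes
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # The sshd record above:
    print(decode_proctitle("audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]

    # One of the tee records earlier in the log:
    print(decode_proctitle(
        "proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565"
        "002F7573722F62696E2F746565002F6465762F66642F3633"))
    # -> /usr/bin/coreutils --coreutils-prog-shebang=tee /usr/bin/tee /dev/fd/63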
Jun 25 16:20:18.551000 audit[3937]: USER_START pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.552000 audit[3939]: CRED_ACQ pid=3939 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.567618 kernel: audit: type=1105 audit(1719332418.551:589): pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.567688 kernel: audit: type=1103 audit(1719332418.552:590): pid=3939 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.676005 sshd[3937]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:18.678000 audit[3937]: USER_END pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.678000 audit[3937]: CRED_DISP pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.685633 kernel: audit: type=1106 audit(1719332418.678:591): pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.685700 kernel: audit: type=1104 audit(1719332418.678:592): pid=3937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.61:22-10.0.0.1:39394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:18.690358 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:39394.service: Deactivated successfully. Jun 25 16:20:18.691143 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 16:20:18.691934 systemd-logind[1271]: Session 12 logged out. Waiting for processes to exit. 
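Each SSH login here leaves a fixed audit trail: USER_ACCT and CRED_ACQ when the key is accepted, USER_START when the PAM session opens, then USER_END and CRED_DISP (plus a SERVICE_STOP for the per-connection systemd unit) when it closes; sessions 11 and 12 above both follow it, for user "core" from 10.0.0.1. A hedged sketch that pairs those records by their ses= field to estimate session lifetimes; the line format is taken from this log, and the year is an assumption because the journal timestamps omit it:

    import re
    from collections import defaultdict
    from datetime import datetime

    # Matches journal-style lines such as:
    #   "Jun 25 16:20:18.551000 audit[3937]: USER_START ... ses=12 ..."
    RECORD = re.compile(
        r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: "
        r"(USER_START|USER_END)\b.*\bses=(\d+)"
    )

    def session_durations(lines, year=2024):
        events = defaultdict(dict)  # ses -> {record type: timestamp}
        for line in lines:
            m = RECORD.search(line)
            if m:
                ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
                events[m.group(3)][m.group(2)] = ts
        return {
            ses: (ev["USER_END"] - ev["USER_START"]).total_seconds()
            for ses, ev in events.items()
            if "USER_START" in ev and "USER_END" in ev
        }

    sample = [
        "Jun 25 16:20:18.551000 audit[3937]: USER_START pid=3937 uid=0 auid=500 ses=12 ...",
        "Jun 25 16:20:18.678000 audit[3937]: USER_END pid=3937 uid=0 auid=500 ses=12 ...",
    ]
    print(session_durations(sample))  # {'12': 0.127} -> session 12 lasted about 0.13 s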
Jun 25 16:20:18.695648 systemd-networkd[1115]: cali28b0a5fe1a6: Link UP Jun 25 16:20:18.697291 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:20:18.697418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali28b0a5fe1a6: link becomes ready Jun 25 16:20:18.697525 systemd-networkd[1115]: cali28b0a5fe1a6: Gained carrier Jun 25 16:20:18.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:39406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:18.698777 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406). Jun 25 16:20:18.704354 systemd-logind[1271]: Removed session 12. Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.622 [INFO][3940] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v8xmb-eth0 csi-node-driver- calico-system 77a47d09-249f-41aa-9f0e-6a405db06ba3 855 0 2024-06-25 16:19:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-v8xmb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali28b0a5fe1a6 [] []}} ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.622 [INFO][3940] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.655 [INFO][3963] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" HandleID="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.665 [INFO][3963] ipam_plugin.go 264: Auto assigning IP ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" HandleID="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f49d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v8xmb", "timestamp":"2024-06-25 16:20:18.655506318 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.665 [INFO][3963] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.665 [INFO][3963] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.665 [INFO][3963] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.667 [INFO][3963] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.671 [INFO][3963] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.674 [INFO][3963] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.677 [INFO][3963] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.679 [INFO][3963] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.679 [INFO][3963] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.682 [INFO][3963] ipam.go 1685: Creating new handle: k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.686 [INFO][3963] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.691 [INFO][3963] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.692 [INFO][3963] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" host="localhost" Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.692 [INFO][3963] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:20:18.712014 containerd[1287]: 2024-06-25 16:20:18.692 [INFO][3963] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" HandleID="k8s-pod-network.db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.694 [INFO][3940] k8s.go 386: Populated endpoint ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8xmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a47d09-249f-41aa-9f0e-6a405db06ba3", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v8xmb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali28b0a5fe1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.694 [INFO][3940] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.694 [INFO][3940] dataplane_linux.go 68: Setting the host side veth name to cali28b0a5fe1a6 ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.697 [INFO][3940] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.697 [INFO][3940] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8xmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a47d09-249f-41aa-9f0e-6a405db06ba3", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f", Pod:"csi-node-driver-v8xmb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali28b0a5fe1a6", MAC:"72:b6:33:ed:b4:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:18.712965 containerd[1287]: 2024-06-25 16:20:18.705 [INFO][3940] k8s.go 500: Wrote updated endpoint to datastore ContainerID="db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f" Namespace="calico-system" Pod="csi-node-driver-v8xmb" WorkloadEndpoint="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:18.722000 audit[3988]: NETFILTER_CFG table=filter:104 family=2 entries=34 op=nft_register_chain pid=3988 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:18.722000 audit[3988]: SYSCALL arch=c000003e syscall=46 success=yes exit=18640 a0=3 a1=7fff81ce68a0 a2=0 a3=7fff81ce688c items=0 ppid=3705 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.722000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:18.726000 audit[3972]: USER_ACCT pid=3972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.728212 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:18.728000 audit[3972]: CRED_ACQ pid=3972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.728000 audit[3972]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea376ac00 a2=3 a3=7f8cc0b19480 items=0 ppid=1 pid=3972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.728000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:18.728617 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:18.731442 containerd[1287]: time="2024-06-25T16:20:18.731320268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:18.731442 containerd[1287]: time="2024-06-25T16:20:18.731393009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:18.731698 containerd[1287]: time="2024-06-25T16:20:18.731653382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:18.731698 containerd[1287]: time="2024-06-25T16:20:18.731687117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:18.733133 systemd-logind[1271]: New session 13 of user core. Jun 25 16:20:18.736374 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 16:20:18.758359 systemd[1]: Started cri-containerd-db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f.scope - libcontainer container db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f. Jun 25 16:20:18.758000 audit[3972]: USER_START pid=3972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.760000 audit[4015]: CRED_ACQ pid=4015 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:18.767000 audit: BPF prog-id=153 op=LOAD Jun 25 16:20:18.767000 audit: BPF prog-id=154 op=LOAD Jun 25 16:20:18.767000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383265623137653863343939346233616534363536353930366565 Jun 25 16:20:18.767000 audit: BPF prog-id=155 op=LOAD Jun 25 16:20:18.767000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383265623137653863343939346233616534363536353930366565 Jun 25 16:20:18.767000 audit: BPF prog-id=155 op=UNLOAD Jun 25 16:20:18.767000 audit: BPF prog-id=154 op=UNLOAD Jun 25 16:20:18.767000 audit: BPF prog-id=156 
op=LOAD Jun 25 16:20:18.767000 audit[4006]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3996 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:18.767000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383265623137653863343939346233616534363536353930366565 Jun 25 16:20:18.768350 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:20:18.777317 containerd[1287]: time="2024-06-25T16:20:18.777273033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8xmb,Uid:77a47d09-249f-41aa-9f0e-6a405db06ba3,Namespace:calico-system,Attempt:1,} returns sandbox id \"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f\"" Jun 25 16:20:19.039442 sshd[3972]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:19.040000 audit[3972]: USER_END pid=3972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.040000 audit[3972]: CRED_DISP pid=3972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.045971 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:39406.service: Deactivated successfully. Jun 25 16:20:19.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.61:22-10.0.0.1:39406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:19.046530 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 16:20:19.047310 systemd-logind[1271]: Session 13 logged out. Waiting for processes to exit. Jun 25 16:20:19.053756 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:39408.service - OpenSSH per-connection server daemon (10.0.0.1:39408). Jun 25 16:20:19.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:39408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:19.054929 systemd-logind[1271]: Removed session 13. 
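The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating the arguments; 737368643A20636F7265205B707269765D, for example, decodes to "sshd: core [priv]". A small Go sketch that decodes such a value (decodeProctitle is an ad-hoc helper written for this example, not part of any audit tooling):

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex payload into a readable
    // command line; the raw value is /proc/<pid>/cmdline, so NUL bytes
    // separate the individual arguments.
    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
        // Value copied from the sshd PROCTITLE record above.
        s, err := decodeProctitle("737368643A20636F7265205B707269765D")
        if err != nil {
            panic(err)
        }
        fmt.Println(s) // sshd: core [priv]
    }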
Jun 25 16:20:19.085000 audit[4038]: USER_ACCT pid=4038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.085973 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 39408 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:19.087000 audit[4038]: CRED_ACQ pid=4038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.088000 audit[4038]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec6ad5840 a2=3 a3=7fc8bac92480 items=0 ppid=1 pid=4038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:19.088924 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:19.096402 systemd-logind[1271]: New session 14 of user core. Jun 25 16:20:19.105453 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 16:20:19.109000 audit[4038]: USER_START pid=4038 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.111000 audit[4040]: CRED_ACQ pid=4040 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.253279 sshd[4038]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:19.252000 audit[4038]: USER_END pid=4038 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.252000 audit[4038]: CRED_DISP pid=4038 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:19.255773 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:39408.service: Deactivated successfully. Jun 25 16:20:19.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.61:22-10.0.0.1:39408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:19.256555 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 16:20:19.257144 systemd-logind[1271]: Session 14 logged out. Waiting for processes to exit. Jun 25 16:20:19.257844 systemd-logind[1271]: Removed session 14. 
Jun 25 16:20:19.303463 systemd-networkd[1115]: caliea4f0d68d82: Gained IPv6LL Jun 25 16:20:19.418203 containerd[1287]: time="2024-06-25T16:20:19.418139837Z" level=info msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] k8s.go 608: Cleaning up netns ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" iface="eth0" netns="/var/run/netns/cni-fec0e2fb-fc09-ecf4-68a5-7ac049765830" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" iface="eth0" netns="/var/run/netns/cni-fec0e2fb-fc09-ecf4-68a5-7ac049765830" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" iface="eth0" netns="/var/run/netns/cni-fec0e2fb-fc09-ecf4-68a5-7ac049765830" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] k8s.go 615: Releasing IP address(es) ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.461 [INFO][4068] utils.go 188: Calico CNI releasing IP address ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.481 [INFO][4076] ipam_plugin.go 411: Releasing address using handleID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.482 [INFO][4076] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.482 [INFO][4076] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.489 [WARNING][4076] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.489 [INFO][4076] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.491 [INFO][4076] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:19.494355 containerd[1287]: 2024-06-25 16:20:19.492 [INFO][4068] k8s.go 621: Teardown processing complete. 
ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:19.494820 containerd[1287]: time="2024-06-25T16:20:19.494568489Z" level=info msg="TearDown network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" successfully" Jun 25 16:20:19.494820 containerd[1287]: time="2024-06-25T16:20:19.494600872Z" level=info msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" returns successfully" Jun 25 16:20:19.494969 kubelet[2281]: E0625 16:20:19.494938 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:19.495311 containerd[1287]: time="2024-06-25T16:20:19.495289479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hfnk2,Uid:1c3a0a88-0373-4ee8-9c8d-f0282229e96f,Namespace:kube-system,Attempt:1,}" Jun 25 16:20:19.589260 systemd[1]: run-netns-cni\x2dfec0e2fb\x2dfc09\x2decf4\x2d68a5\x2d7ac049765830.mount: Deactivated successfully. Jun 25 16:20:19.606547 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5e30730a46f: link becomes ready Jun 25 16:20:19.606095 systemd-networkd[1115]: cali5e30730a46f: Link UP Jun 25 16:20:19.606246 systemd-networkd[1115]: cali5e30730a46f: Gained carrier Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.540 [INFO][4084] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--hfnk2-eth0 coredns-5dd5756b68- kube-system 1c3a0a88-0373-4ee8-9c8d-f0282229e96f 882 0 2024-06-25 16:19:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-hfnk2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e30730a46f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.540 [INFO][4084] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.562 [INFO][4099] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" HandleID="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.569 [INFO][4099] ipam_plugin.go 264: Auto assigning IP ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" HandleID="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003088b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-hfnk2", "timestamp":"2024-06-25 16:20:19.562908635 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.569 [INFO][4099] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.569 [INFO][4099] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.569 [INFO][4099] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.571 [INFO][4099] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.574 [INFO][4099] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.587 [INFO][4099] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.589 [INFO][4099] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.592 [INFO][4099] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.592 [INFO][4099] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.593 [INFO][4099] ipam.go 1685: Creating new handle: k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302 Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.596 [INFO][4099] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.600 [INFO][4099] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.601 [INFO][4099] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" host="localhost" Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.601 [INFO][4099] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 16:20:19.618200 containerd[1287]: 2024-06-25 16:20:19.601 [INFO][4099] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" HandleID="k8s-pod-network.b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.602 [INFO][4084] k8s.go 386: Populated endpoint ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--hfnk2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1c3a0a88-0373-4ee8-9c8d-f0282229e96f", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-hfnk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e30730a46f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.603 [INFO][4084] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.603 [INFO][4084] dataplane_linux.go 68: Setting the host side veth name to cali5e30730a46f ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.606 [INFO][4084] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.606 [INFO][4084] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--hfnk2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1c3a0a88-0373-4ee8-9c8d-f0282229e96f", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302", Pod:"coredns-5dd5756b68-hfnk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e30730a46f", MAC:"2e:43:77:46:9e:47", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:19.618788 containerd[1287]: 2024-06-25 16:20:19.614 [INFO][4084] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302" Namespace="kube-system" Pod="coredns-5dd5756b68-hfnk2" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:19.624000 audit[4120]: NETFILTER_CFG table=filter:105 family=2 entries=42 op=nft_register_chain pid=4120 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:19.624000 audit[4120]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffdddf32a00 a2=0 a3=7ffdddf329ec items=0 ppid=3705 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.624000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:19.634061 containerd[1287]: time="2024-06-25T16:20:19.633972887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:19.634061 containerd[1287]: time="2024-06-25T16:20:19.634019057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:19.634061 containerd[1287]: time="2024-06-25T16:20:19.634036370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:19.634061 containerd[1287]: time="2024-06-25T16:20:19.634053423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:19.652403 systemd[1]: Started cri-containerd-b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302.scope - libcontainer container b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302. Jun 25 16:20:19.662000 audit: BPF prog-id=157 op=LOAD Jun 25 16:20:19.663000 audit: BPF prog-id=158 op=LOAD Jun 25 16:20:19.663000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4129 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333265343438343438663239316463653061383838656630336163 Jun 25 16:20:19.663000 audit: BPF prog-id=159 op=LOAD Jun 25 16:20:19.663000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4129 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333265343438343438663239316463653061383838656630336163 Jun 25 16:20:19.663000 audit: BPF prog-id=159 op=UNLOAD Jun 25 16:20:19.663000 audit: BPF prog-id=158 op=UNLOAD Jun 25 16:20:19.663000 audit: BPF prog-id=160 op=LOAD Jun 25 16:20:19.663000 audit[4138]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4129 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234333265343438343438663239316463653061383838656630336163 Jun 25 16:20:19.665112 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:20:19.685113 containerd[1287]: time="2024-06-25T16:20:19.685068976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hfnk2,Uid:1c3a0a88-0373-4ee8-9c8d-f0282229e96f,Namespace:kube-system,Attempt:1,} returns sandbox id \"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302\"" Jun 25 16:20:19.686164 kubelet[2281]: E0625 16:20:19.685674 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:19.690781 containerd[1287]: time="2024-06-25T16:20:19.690742232Z" level=info msg="CreateContainer within sandbox \"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:20:19.708602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460453263.mount: Deactivated successfully. Jun 25 16:20:19.717824 containerd[1287]: time="2024-06-25T16:20:19.717784262Z" level=info msg="CreateContainer within sandbox \"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1308e14ab7fdb21b0195b560d4a80f1f58d21f6c760e1e3dd1fa28cf4782e83f\"" Jun 25 16:20:19.718533 containerd[1287]: time="2024-06-25T16:20:19.718366906Z" level=info msg="StartContainer for \"1308e14ab7fdb21b0195b560d4a80f1f58d21f6c760e1e3dd1fa28cf4782e83f\"" Jun 25 16:20:19.741365 systemd[1]: Started cri-containerd-1308e14ab7fdb21b0195b560d4a80f1f58d21f6c760e1e3dd1fa28cf4782e83f.scope - libcontainer container 1308e14ab7fdb21b0195b560d4a80f1f58d21f6c760e1e3dd1fa28cf4782e83f. Jun 25 16:20:19.749000 audit: BPF prog-id=161 op=LOAD Jun 25 16:20:19.750000 audit: BPF prog-id=162 op=LOAD Jun 25 16:20:19.750000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4129 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303865313461623766646232316230313935623536306434613830 Jun 25 16:20:19.750000 audit: BPF prog-id=163 op=LOAD Jun 25 16:20:19.750000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4129 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303865313461623766646232316230313935623536306434613830 Jun 25 16:20:19.750000 audit: BPF prog-id=163 op=UNLOAD Jun 25 16:20:19.750000 audit: BPF prog-id=162 op=UNLOAD Jun 25 16:20:19.750000 audit: BPF prog-id=164 op=LOAD Jun 25 16:20:19.750000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4129 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:19.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133303865313461623766646232316230313935623536306434613830 Jun 25 16:20:19.763025 containerd[1287]: time="2024-06-25T16:20:19.762969657Z" level=info msg="StartContainer for 
\"1308e14ab7fdb21b0195b560d4a80f1f58d21f6c760e1e3dd1fa28cf4782e83f\" returns successfully" Jun 25 16:20:20.006696 kubelet[2281]: E0625 16:20:20.006552 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:20.114000 audit[4204]: NETFILTER_CFG table=filter:106 family=2 entries=14 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:20.114000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcd04b71c0 a2=0 a3=7ffcd04b71ac items=0 ppid=2426 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.114000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:20.115000 audit[4204]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:20.115000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcd04b71c0 a2=0 a3=0 items=0 ppid=2426 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:20.392437 systemd-networkd[1115]: cali28b0a5fe1a6: Gained IPv6LL Jun 25 16:20:20.417739 containerd[1287]: time="2024-06-25T16:20:20.417662614Z" level=info msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" Jun 25 16:20:20.461553 kubelet[2281]: I0625 16:20:20.461508 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hfnk2" podStartSLOduration=41.461458574 podCreationTimestamp="2024-06-25 16:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:20.045962851 +0000 UTC m=+56.707006703" watchObservedRunningTime="2024-06-25 16:20:20.461458574 +0000 UTC m=+57.122502446" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] k8s.go 608: Cleaning up netns ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" iface="eth0" netns="/var/run/netns/cni-aec69ca7-c930-0b38-2f0a-eadd9dc03464" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" iface="eth0" netns="/var/run/netns/cni-aec69ca7-c930-0b38-2f0a-eadd9dc03464" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" iface="eth0" netns="/var/run/netns/cni-aec69ca7-c930-0b38-2f0a-eadd9dc03464" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] k8s.go 615: Releasing IP address(es) ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.460 [INFO][4221] utils.go 188: Calico CNI releasing IP address ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.487 [INFO][4228] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.487 [INFO][4228] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.487 [INFO][4228] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.493 [WARNING][4228] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.493 [INFO][4228] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.495 [INFO][4228] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:20.497551 containerd[1287]: 2024-06-25 16:20:20.496 [INFO][4221] k8s.go 621: Teardown processing complete. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:20.498173 containerd[1287]: time="2024-06-25T16:20:20.498135200Z" level=info msg="TearDown network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" successfully" Jun 25 16:20:20.498242 containerd[1287]: time="2024-06-25T16:20:20.498172521Z" level=info msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" returns successfully" Jun 25 16:20:20.498558 kubelet[2281]: E0625 16:20:20.498536 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:20.498962 containerd[1287]: time="2024-06-25T16:20:20.498917527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xhllt,Uid:e35b0eb9-faf1-4363-bfed-20a104ae884f,Namespace:kube-system,Attempt:1,}" Jun 25 16:20:20.589884 systemd[1]: run-netns-cni\x2daec69ca7\x2dc930\x2d0b38\x2d2f0a\x2deadd9dc03464.mount: Deactivated successfully. 
Jun 25 16:20:20.626681 systemd-networkd[1115]: calia6db1a58465: Link UP Jun 25 16:20:20.628781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:20:20.628860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia6db1a58465: link becomes ready Jun 25 16:20:20.628959 systemd-networkd[1115]: calia6db1a58465: Gained carrier Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.549 [INFO][4238] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--xhllt-eth0 coredns-5dd5756b68- kube-system e35b0eb9-faf1-4363-bfed-20a104ae884f 899 0 2024-06-25 16:19:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-xhllt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6db1a58465 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.549 [INFO][4238] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.579 [INFO][4249] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" HandleID="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.593 [INFO][4249] ipam_plugin.go 264: Auto assigning IP ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" HandleID="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b3e10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-xhllt", "timestamp":"2024-06-25 16:20:20.579774573 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.593 [INFO][4249] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.593 [INFO][4249] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.593 [INFO][4249] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.595 [INFO][4249] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.599 [INFO][4249] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.603 [INFO][4249] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.606 [INFO][4249] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.608 [INFO][4249] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.608 [INFO][4249] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.609 [INFO][4249] ipam.go 1685: Creating new handle: k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.614 [INFO][4249] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.619 [INFO][4249] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.619 [INFO][4249] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" host="localhost" Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.619 [INFO][4249] ipam_plugin.go 373: Released host-wide IPAM lock. 
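The WorkloadEndpoint dumps above print the coredns container ports as Go hex literals (Port:0x35, Port:0x23c1). A tiny sketch mapping them back to decimal, assuming only what the dumps themselves show:

    package main

    import "fmt"

    func main() {
        // Port values taken from the endpoint dumps above, with their
        // decimal equivalents in the comments.
        ports := []struct {
            name  string
            value uint16
        }{
            {"dns (UDP)", 0x35},       // 53
            {"dns-tcp (TCP)", 0x35},   // 53
            {"metrics (TCP)", 0x23c1}, // 9153, the coredns metrics port
        }
        for _, p := range ports {
            fmt.Printf("%-14s 0x%x = %d\n", p.name, p.value, p.value)
        }
    }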
Jun 25 16:20:20.644663 containerd[1287]: 2024-06-25 16:20:20.619 [INFO][4249] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" HandleID="k8s-pod-network.6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.621 [INFO][4238] k8s.go 386: Populated endpoint ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xhllt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e35b0eb9-faf1-4363-bfed-20a104ae884f", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-xhllt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6db1a58465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.622 [INFO][4238] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.622 [INFO][4238] dataplane_linux.go 68: Setting the host side veth name to calia6db1a58465 ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.629 [INFO][4238] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.632 [INFO][4238] k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xhllt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e35b0eb9-faf1-4363-bfed-20a104ae884f", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a", Pod:"coredns-5dd5756b68-xhllt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6db1a58465", MAC:"1e:b5:5e:a4:e5:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:20.645292 containerd[1287]: 2024-06-25 16:20:20.641 [INFO][4238] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a" Namespace="kube-system" Pod="coredns-5dd5756b68-xhllt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:20.655000 audit[4270]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:20:20.655000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7fffb36ee620 a2=0 a3=7fffb36ee60c items=0 ppid=3705 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.655000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:20:20.681719 containerd[1287]: time="2024-06-25T16:20:20.681252403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:20:20.681719 containerd[1287]: time="2024-06-25T16:20:20.681680147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:20.681719 containerd[1287]: time="2024-06-25T16:20:20.681697240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:20:20.681719 containerd[1287]: time="2024-06-25T16:20:20.681706668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:20:20.702367 systemd[1]: Started cri-containerd-6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a.scope - libcontainer container 6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a. Jun 25 16:20:20.710000 audit: BPF prog-id=165 op=LOAD Jun 25 16:20:20.710000 audit: BPF prog-id=166 op=LOAD Jun 25 16:20:20.710000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a5988 a2=78 a3=0 items=0 ppid=4279 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663635383662636437383135333963636464323934353435303539 Jun 25 16:20:20.710000 audit: BPF prog-id=167 op=LOAD Jun 25 16:20:20.710000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a5720 a2=78 a3=0 items=0 ppid=4279 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663635383662636437383135333963636464323934353435303539 Jun 25 16:20:20.710000 audit: BPF prog-id=167 op=UNLOAD Jun 25 16:20:20.710000 audit: BPF prog-id=166 op=UNLOAD Jun 25 16:20:20.710000 audit: BPF prog-id=168 op=LOAD Jun 25 16:20:20.710000 audit[4289]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a5be0 a2=78 a3=0 items=0 ppid=4279 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661663635383662636437383135333963636464323934353435303539 Jun 25 16:20:20.712697 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:20:20.740651 containerd[1287]: time="2024-06-25T16:20:20.740602528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-xhllt,Uid:e35b0eb9-faf1-4363-bfed-20a104ae884f,Namespace:kube-system,Attempt:1,} returns sandbox id \"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a\"" Jun 25 16:20:20.741829 kubelet[2281]: E0625 16:20:20.741803 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:20.743869 containerd[1287]: time="2024-06-25T16:20:20.743817015Z" level=info msg="CreateContainer within sandbox \"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 16:20:20.802907 containerd[1287]: time="2024-06-25T16:20:20.802840620Z" level=info msg="CreateContainer within sandbox \"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9025153babf7a89010c8c99e4cc73ea436991849c782c0d3692bcda13f86fd21\"" Jun 25 16:20:20.803529 containerd[1287]: time="2024-06-25T16:20:20.803489842Z" level=info msg="StartContainer for \"9025153babf7a89010c8c99e4cc73ea436991849c782c0d3692bcda13f86fd21\"" Jun 25 16:20:20.829437 systemd[1]: Started cri-containerd-9025153babf7a89010c8c99e4cc73ea436991849c782c0d3692bcda13f86fd21.scope - libcontainer container 9025153babf7a89010c8c99e4cc73ea436991849c782c0d3692bcda13f86fd21. Jun 25 16:20:20.841000 audit: BPF prog-id=169 op=LOAD Jun 25 16:20:20.842000 audit: BPF prog-id=170 op=LOAD Jun 25 16:20:20.842000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4279 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930323531353362616266376138393031306338633939653463633733 Jun 25 16:20:20.842000 audit: BPF prog-id=171 op=LOAD Jun 25 16:20:20.842000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4279 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930323531353362616266376138393031306338633939653463633733 Jun 25 16:20:20.842000 audit: BPF prog-id=171 op=UNLOAD Jun 25 16:20:20.842000 audit: BPF prog-id=170 op=UNLOAD Jun 25 16:20:20.842000 audit: BPF prog-id=172 op=LOAD Jun 25 16:20:20.842000 audit[4321]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4279 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:20.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930323531353362616266376138393031306338633939653463633733 Jun 25 16:20:20.855792 containerd[1287]: time="2024-06-25T16:20:20.855720493Z" level=info msg="StartContainer for \"9025153babf7a89010c8c99e4cc73ea436991849c782c0d3692bcda13f86fd21\" returns successfully" Jun 25 16:20:21.010882 kubelet[2281]: E0625 16:20:21.010759 2281 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:21.011040 kubelet[2281]: E0625 16:20:21.010896 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:21.020927 kubelet[2281]: I0625 16:20:21.020872 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xhllt" podStartSLOduration=42.020825083 podCreationTimestamp="2024-06-25 16:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 16:20:21.020285965 +0000 UTC m=+57.681329817" watchObservedRunningTime="2024-06-25 16:20:21.020825083 +0000 UTC m=+57.681868935" Jun 25 16:20:21.030000 audit[4354]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=4354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:21.030000 audit[4354]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffce31e0840 a2=0 a3=7ffce31e082c items=0 ppid=2426 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:21.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:21.031000 audit[4354]: NETFILTER_CFG table=nat:110 family=2 entries=14 op=nft_register_rule pid=4354 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:21.031000 audit[4354]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce31e0840 a2=0 a3=0 items=0 ppid=2426 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:21.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:21.609791 systemd-networkd[1115]: cali5e30730a46f: Gained IPv6LL Jun 25 16:20:22.013250 kubelet[2281]: E0625 16:20:22.012979 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:22.013250 kubelet[2281]: E0625 16:20:22.013140 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:22.055410 systemd-networkd[1115]: calia6db1a58465: Gained IPv6LL Jun 25 16:20:22.212000 audit[4366]: NETFILTER_CFG table=filter:111 family=2 entries=11 op=nft_register_rule pid=4366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:22.212000 audit[4366]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd7645b210 a2=0 a3=7ffd7645b1fc items=0 ppid=2426 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.212000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:22.213000 audit[4366]: NETFILTER_CFG table=nat:112 family=2 entries=35 op=nft_register_chain pid=4366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:22.213000 audit[4366]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd7645b210 a2=0 a3=7ffd7645b1fc items=0 ppid=2426 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:22.338000 audit[4369]: NETFILTER_CFG table=filter:113 family=2 entries=8 op=nft_register_rule pid=4369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:22.338000 audit[4369]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffd70b7580 a2=0 a3=7fffd70b756c items=0 ppid=2426 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.338000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:22.348000 audit[4369]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:22.348000 audit[4369]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffd70b7580 a2=0 a3=7fffd70b756c items=0 ppid=2426 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:22.350782 containerd[1287]: time="2024-06-25T16:20:22.350729067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:22.354562 containerd[1287]: time="2024-06-25T16:20:22.354463255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 16:20:22.356554 containerd[1287]: time="2024-06-25T16:20:22.356511467Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:22.359144 containerd[1287]: time="2024-06-25T16:20:22.359078319Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:22.365455 containerd[1287]: time="2024-06-25T16:20:22.365272232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:22.366563 containerd[1287]: time="2024-06-25T16:20:22.366132188Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.750470101s" Jun 25 16:20:22.366563 containerd[1287]: time="2024-06-25T16:20:22.366204256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 16:20:22.367936 containerd[1287]: time="2024-06-25T16:20:22.367783356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 16:20:22.382911 containerd[1287]: time="2024-06-25T16:20:22.382852183Z" level=info msg="CreateContainer within sandbox \"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 16:20:22.427255 containerd[1287]: time="2024-06-25T16:20:22.427185655Z" level=info msg="CreateContainer within sandbox \"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4dbb52215336eae11b0e1164a4eba4a5cf0d9c0ba7afa97bdeb7cb85149c3300\"" Jun 25 16:20:22.427966 containerd[1287]: time="2024-06-25T16:20:22.427932312Z" level=info msg="StartContainer for \"4dbb52215336eae11b0e1164a4eba4a5cf0d9c0ba7afa97bdeb7cb85149c3300\"" Jun 25 16:20:22.455546 systemd[1]: Started cri-containerd-4dbb52215336eae11b0e1164a4eba4a5cf0d9c0ba7afa97bdeb7cb85149c3300.scope - libcontainer container 4dbb52215336eae11b0e1164a4eba4a5cf0d9c0ba7afa97bdeb7cb85149c3300. 
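The audit PROCTITLE fields in this trace are hex-encoded, NUL-separated command lines. Below is a minimal Go sketch for decoding them offline; the sample value is the iptables-restore proctitle recorded above, and the helper name is only illustrative:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle converts an audit PROCTITLE hex string into the command
// line it encodes; the arguments are NUL-separated in the raw value.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// proctitle value taken from the iptables-restore audit events above.
	const sample = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}
```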
Jun 25 16:20:22.471000 audit: BPF prog-id=173 op=LOAD Jun 25 16:20:22.472000 audit: BPF prog-id=174 op=LOAD Jun 25 16:20:22.472000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3869 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626235323231353333366561653131623065313136346134656261 Jun 25 16:20:22.473000 audit: BPF prog-id=175 op=LOAD Jun 25 16:20:22.473000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3869 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626235323231353333366561653131623065313136346134656261 Jun 25 16:20:22.473000 audit: BPF prog-id=175 op=UNLOAD Jun 25 16:20:22.473000 audit: BPF prog-id=174 op=UNLOAD Jun 25 16:20:22.473000 audit: BPF prog-id=176 op=LOAD Jun 25 16:20:22.473000 audit[4381]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3869 pid=4381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:22.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464626235323231353333366561653131623065313136346134656261 Jun 25 16:20:22.531161 containerd[1287]: time="2024-06-25T16:20:22.530999066Z" level=info msg="StartContainer for \"4dbb52215336eae11b0e1164a4eba4a5cf0d9c0ba7afa97bdeb7cb85149c3300\" returns successfully" Jun 25 16:20:23.017493 kubelet[2281]: E0625 16:20:23.017385 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:23.018353 kubelet[2281]: E0625 16:20:23.018177 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:23.407421 containerd[1287]: time="2024-06-25T16:20:23.407376355Z" level=info msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.448 [WARNING][4425] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0", GenerateName:"calico-kube-controllers-5688959d9c-", Namespace:"calico-system", SelfLink:"", UID:"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5688959d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202", Pod:"calico-kube-controllers-5688959d9c-2jz59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4f0d68d82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.448 [INFO][4425] k8s.go 608: Cleaning up netns ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.448 [INFO][4425] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" iface="eth0" netns="" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.448 [INFO][4425] k8s.go 615: Releasing IP address(es) ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.448 [INFO][4425] utils.go 188: Calico CNI releasing IP address ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.480 [INFO][4435] ipam_plugin.go 411: Releasing address using handleID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.481 [INFO][4435] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.481 [INFO][4435] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.491 [WARNING][4435] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.491 [INFO][4435] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.493 [INFO][4435] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:23.496760 containerd[1287]: 2024-06-25 16:20:23.495 [INFO][4425] k8s.go 621: Teardown processing complete. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.496760 containerd[1287]: time="2024-06-25T16:20:23.496610572Z" level=info msg="TearDown network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" successfully" Jun 25 16:20:23.496760 containerd[1287]: time="2024-06-25T16:20:23.496654697Z" level=info msg="StopPodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" returns successfully" Jun 25 16:20:23.501858 containerd[1287]: time="2024-06-25T16:20:23.501789755Z" level=info msg="RemovePodSandbox for \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" Jun 25 16:20:23.511594 containerd[1287]: time="2024-06-25T16:20:23.505172272Z" level=info msg="Forcibly stopping sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\"" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.626 [WARNING][4457] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0", GenerateName:"calico-kube-controllers-5688959d9c-", Namespace:"calico-system", SelfLink:"", UID:"8c346c41-8c1c-4ee1-9f01-b8cf91cb6c91", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5688959d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcccaf64449e300c5864f24f1ae673fa3436eb57ee8eb9264a3d5e81a65d7202", Pod:"calico-kube-controllers-5688959d9c-2jz59", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea4f0d68d82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.626 [INFO][4457] k8s.go 608: Cleaning up netns ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.627 [INFO][4457] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" iface="eth0" netns="" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.627 [INFO][4457] k8s.go 615: Releasing IP address(es) ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.627 [INFO][4457] utils.go 188: Calico CNI releasing IP address ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.651 [INFO][4464] ipam_plugin.go 411: Releasing address using handleID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.651 [INFO][4464] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.651 [INFO][4464] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.660 [WARNING][4464] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.660 [INFO][4464] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" HandleID="k8s-pod-network.7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Workload="localhost-k8s-calico--kube--controllers--5688959d9c--2jz59-eth0" Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.662 [INFO][4464] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:23.671013 containerd[1287]: 2024-06-25 16:20:23.666 [INFO][4457] k8s.go 621: Teardown processing complete. ContainerID="7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa" Jun 25 16:20:23.671013 containerd[1287]: time="2024-06-25T16:20:23.669228621Z" level=info msg="TearDown network for sandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" successfully" Jun 25 16:20:23.952775 containerd[1287]: time="2024-06-25T16:20:23.952615129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:20:23.952775 containerd[1287]: time="2024-06-25T16:20:23.952696876Z" level=info msg="RemovePodSandbox \"7669543c82a052958c48053d8c8993428ef29788647746f806acc0b074752afa\" returns successfully" Jun 25 16:20:23.953270 containerd[1287]: time="2024-06-25T16:20:23.953235303Z" level=info msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:23.986 [WARNING][4487] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xhllt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e35b0eb9-faf1-4363-bfed-20a104ae884f", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a", Pod:"coredns-5dd5756b68-xhllt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6db1a58465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:23.986 [INFO][4487] k8s.go 608: Cleaning up netns ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:23.986 [INFO][4487] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" iface="eth0" netns="" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:23.986 [INFO][4487] k8s.go 615: Releasing IP address(es) ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:23.986 [INFO][4487] utils.go 188: Calico CNI releasing IP address ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.002 [INFO][4494] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.002 [INFO][4494] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.003 [INFO][4494] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.007 [WARNING][4494] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.007 [INFO][4494] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.009 [INFO][4494] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.011409 containerd[1287]: 2024-06-25 16:20:24.010 [INFO][4487] k8s.go 621: Teardown processing complete. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.011859 containerd[1287]: time="2024-06-25T16:20:24.011454564Z" level=info msg="TearDown network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" successfully" Jun 25 16:20:24.011859 containerd[1287]: time="2024-06-25T16:20:24.011492906Z" level=info msg="StopPodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" returns successfully" Jun 25 16:20:24.012018 containerd[1287]: time="2024-06-25T16:20:24.011978531Z" level=info msg="RemovePodSandbox for \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" Jun 25 16:20:24.012058 containerd[1287]: time="2024-06-25T16:20:24.012013598Z" level=info msg="Forcibly stopping sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\"" Jun 25 16:20:24.020469 kubelet[2281]: E0625 16:20:24.020419 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.048 [WARNING][4517] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--xhllt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e35b0eb9-faf1-4363-bfed-20a104ae884f", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6af6586bcd781539ccdd2945450594fa8283ff92bb2c95fe53e706a89440ff8a", Pod:"coredns-5dd5756b68-xhllt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6db1a58465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.048 [INFO][4517] k8s.go 608: Cleaning up netns ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.048 [INFO][4517] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" iface="eth0" netns="" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.048 [INFO][4517] k8s.go 615: Releasing IP address(es) ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.048 [INFO][4517] utils.go 188: Calico CNI releasing IP address ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.065 [INFO][4541] ipam_plugin.go 411: Releasing address using handleID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.065 [INFO][4541] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.065 [INFO][4541] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.078 [WARNING][4541] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.078 [INFO][4541] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" HandleID="k8s-pod-network.9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Workload="localhost-k8s-coredns--5dd5756b68--xhllt-eth0" Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.080 [INFO][4541] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.083716 containerd[1287]: 2024-06-25 16:20:24.082 [INFO][4517] k8s.go 621: Teardown processing complete. ContainerID="9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce" Jun 25 16:20:24.084172 containerd[1287]: time="2024-06-25T16:20:24.083757296Z" level=info msg="TearDown network for sandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" successfully" Jun 25 16:20:24.088309 kubelet[2281]: I0625 16:20:24.085935 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5688959d9c-2jz59" podStartSLOduration=30.333803387 podCreationTimestamp="2024-06-25 16:19:49 +0000 UTC" firstStartedPulling="2024-06-25 16:20:17.614891458 +0000 UTC m=+54.275935310" lastFinishedPulling="2024-06-25 16:20:22.366973928 +0000 UTC m=+59.028017780" observedRunningTime="2024-06-25 16:20:23.036672839 +0000 UTC m=+59.697716691" watchObservedRunningTime="2024-06-25 16:20:24.085885857 +0000 UTC m=+60.746929729" Jun 25 16:20:24.168374 containerd[1287]: time="2024-06-25T16:20:24.168308674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
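The pod_startup_latency_tracker entry above reports firstStartedPulling and lastFinishedPulling timestamps for calico-kube-controllers-5688959d9c-2jz59; their difference is the image pull window and lines up with the roughly 4.75 s pull that containerd reported for calico/kube-controllers:v3.28.0. A small Go sketch of that arithmetic, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	started, err := time.Parse(layout, "2024-06-25 16:20:17.614891458 +0000 UTC")
	if err != nil {
		panic(err)
	}
	finished, err := time.Parse(layout, "2024-06-25 16:20:22.366973928 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// The image pull window for the calico-kube-controllers pod.
	fmt.Println(finished.Sub(started)) // 4.75208247s
}
```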
Jun 25 16:20:24.168692 containerd[1287]: time="2024-06-25T16:20:24.168381424Z" level=info msg="RemovePodSandbox \"9dd87e27a30bc91fbaf747e0138e9a569d01c660fb95cdf5b03d399368d206ce\" returns successfully" Jun 25 16:20:24.168884 containerd[1287]: time="2024-06-25T16:20:24.168861888Z" level=info msg="StopPodSandbox for \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\"" Jun 25 16:20:24.169001 containerd[1287]: time="2024-06-25T16:20:24.168954866Z" level=info msg="TearDown network for sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" successfully" Jun 25 16:20:24.169001 containerd[1287]: time="2024-06-25T16:20:24.169001546Z" level=info msg="StopPodSandbox for \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" returns successfully" Jun 25 16:20:24.169245 containerd[1287]: time="2024-06-25T16:20:24.169208404Z" level=info msg="RemovePodSandbox for \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\"" Jun 25 16:20:24.169371 containerd[1287]: time="2024-06-25T16:20:24.169323064Z" level=info msg="Forcibly stopping sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\"" Jun 25 16:20:24.169427 containerd[1287]: time="2024-06-25T16:20:24.169387468Z" level=info msg="TearDown network for sandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" successfully" Jun 25 16:20:24.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:39410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:24.269232 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:39410.service - OpenSSH per-connection server daemon (10.0.0.1:39410). Jun 25 16:20:24.270823 kernel: kauditd_printk_skb: 128 callbacks suppressed Jun 25 16:20:24.270899 kernel: audit: type=1130 audit(1719332424.268:659): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:39410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:24.297000 audit[4554]: USER_ACCT pid=4554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.298979 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 39410 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:24.300270 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:24.298000 audit[4554]: CRED_ACQ pid=4554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.306427 systemd-logind[1271]: New session 15 of user core. 
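The kernel-forwarded audit records above stamp events as audit(seconds.millis:serial), that is, Unix epoch time plus a per-boot serial number. A short Go sketch converting the value audit(1719332424.268:659) from the sshd service-start record back to the wall-clock time used by the surrounding log lines:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// audit(1719332424.268:659): seconds.milliseconds since the Unix epoch,
	// followed by a per-boot event serial number.
	sec, ms := int64(1719332424), int64(268)
	t := time.Unix(sec, ms*int64(time.Millisecond))
	fmt.Println(t.UTC().Format(time.RFC3339Nano)) // 2024-06-25T16:20:24.268Z
}
```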
Jun 25 16:20:24.316543 kernel: audit: type=1101 audit(1719332424.297:660): pid=4554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.316586 kernel: audit: type=1103 audit(1719332424.298:661): pid=4554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.316622 kernel: audit: type=1006 audit(1719332424.298:662): pid=4554 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 16:20:24.316641 kernel: audit: type=1300 audit(1719332424.298:662): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc95a0eb00 a2=3 a3=7f41e457d480 items=0 ppid=1 pid=4554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:24.316814 kernel: audit: type=1327 audit(1719332424.298:662): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:24.298000 audit[4554]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc95a0eb00 a2=3 a3=7f41e457d480 items=0 ppid=1 pid=4554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:24.298000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:24.316445 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 16:20:24.320000 audit[4554]: USER_START pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.321000 audit[4556]: CRED_ACQ pid=4556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.329686 kernel: audit: type=1105 audit(1719332424.320:663): pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.329750 kernel: audit: type=1103 audit(1719332424.321:664): pid=4556 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.420842 containerd[1287]: time="2024-06-25T16:20:24.420777271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
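The recurring kubelet dns.go warning in this trace ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicates the node's resolv.conf lists more resolvers than the classic three-nameserver limit, so the extras are dropped. An illustrative Go sketch of that trimming over a hypothetical resolv.conf; this is not the kubelet's own implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// appliedNameservers collects "nameserver" entries from resolv.conf-style
// content and keeps at most maxNS of them, which is what the kubelet warning
// describes ("some nameservers have been omitted").
func appliedNameservers(resolvConf string, maxNS int) (applied, omitted []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	var all []string
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) <= maxNS {
		return all, nil
	}
	return all[:maxNS], all[maxNS:]
}

func main() {
	// Hypothetical resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, omitted := appliedNameservers(conf, 3) // 3 = classic resolv.conf limit
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
	fmt.Println("omitted:", omitted)
}
```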
Jun 25 16:20:24.421235 containerd[1287]: time="2024-06-25T16:20:24.420858377Z" level=info msg="RemovePodSandbox \"728ae3e3d3bdfb163d18c759b203cb7f947730d73c1306e108762ba4d2cd926f\" returns successfully" Jun 25 16:20:24.421720 containerd[1287]: time="2024-06-25T16:20:24.421686840Z" level=info msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" Jun 25 16:20:24.498911 kubelet[2281]: E0625 16:20:24.498873 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:24.502377 sshd[4554]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:24.508000 audit[4554]: USER_END pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.458 [WARNING][4604] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--hfnk2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1c3a0a88-0373-4ee8-9c8d-f0282229e96f", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302", Pod:"coredns-5dd5756b68-hfnk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e30730a46f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.458 [INFO][4604] k8s.go 608: Cleaning up netns ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.458 [INFO][4604] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" iface="eth0" netns="" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.458 [INFO][4604] k8s.go 615: Releasing IP address(es) ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.459 [INFO][4604] utils.go 188: Calico CNI releasing IP address ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.487 [INFO][4611] ipam_plugin.go 411: Releasing address using handleID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.487 [INFO][4611] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.487 [INFO][4611] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.497 [WARNING][4611] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.499 [INFO][4611] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.501 [INFO][4611] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.510742 containerd[1287]: 2024-06-25 16:20:24.508 [INFO][4604] k8s.go 621: Teardown processing complete. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.511075 containerd[1287]: time="2024-06-25T16:20:24.510773787Z" level=info msg="TearDown network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" successfully" Jun 25 16:20:24.511075 containerd[1287]: time="2024-06-25T16:20:24.510803264Z" level=info msg="StopPodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" returns successfully" Jun 25 16:20:24.510875 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:39410.service: Deactivated successfully. Jun 25 16:20:24.511714 systemd-logind[1271]: Session 15 logged out. Waiting for processes to exit. Jun 25 16:20:24.511743 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 16:20:24.512606 systemd-logind[1271]: Removed session 15. 
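The Workload and WorkloadEndpoint names above follow a visible pattern: node, then "k8s", then the pod name with every "-" doubled, then the interface, e.g. localhost-k8s-coredns--5dd5756b68--hfnk2-eth0. A Go sketch reproducing that pattern as inferred from this log (not taken from the Calico sources):

```go
package main

import (
	"fmt"
	"strings"
)

// wepName reproduces the WorkloadEndpoint naming pattern visible in the
// trace: <node>-k8s-<pod name with "-" doubled>-<interface>.
func wepName(node, pod, iface string) string {
	escaped := strings.ReplaceAll(pod, "-", "--")
	return fmt.Sprintf("%s-k8s-%s-%s", node, escaped, iface)
}

func main() {
	fmt.Println(wepName("localhost", "coredns-5dd5756b68-hfnk2", "eth0"))
	// localhost-k8s-coredns--5dd5756b68--hfnk2-eth0
}
```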
Jun 25 16:20:24.514287 containerd[1287]: time="2024-06-25T16:20:24.514265770Z" level=info msg="RemovePodSandbox for \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" Jun 25 16:20:24.514351 containerd[1287]: time="2024-06-25T16:20:24.514295438Z" level=info msg="Forcibly stopping sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\"" Jun 25 16:20:24.508000 audit[4554]: CRED_DISP pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.519073 kernel: audit: type=1106 audit(1719332424.508:665): pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.519124 kernel: audit: type=1104 audit(1719332424.508:666): pid=4554 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:24.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.61:22-10.0.0.1:39410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.727 [WARNING][4636] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--hfnk2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"1c3a0a88-0373-4ee8-9c8d-f0282229e96f", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b432e448448f291dce0a888ef03ac5687449aa1a2e84e6e7e2d6901d147b0302", Pod:"coredns-5dd5756b68-hfnk2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e30730a46f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.727 [INFO][4636] k8s.go 608: Cleaning up netns ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.728 [INFO][4636] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" iface="eth0" netns="" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.728 [INFO][4636] k8s.go 615: Releasing IP address(es) ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.728 [INFO][4636] utils.go 188: Calico CNI releasing IP address ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.750 [INFO][4643] ipam_plugin.go 411: Releasing address using handleID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.750 [INFO][4643] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.750 [INFO][4643] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.756 [WARNING][4643] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.756 [INFO][4643] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" HandleID="k8s-pod-network.8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Workload="localhost-k8s-coredns--5dd5756b68--hfnk2-eth0" Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.760 [INFO][4643] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.766080 containerd[1287]: 2024-06-25 16:20:24.764 [INFO][4636] k8s.go 621: Teardown processing complete. ContainerID="8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09" Jun 25 16:20:24.766680 containerd[1287]: time="2024-06-25T16:20:24.766118232Z" level=info msg="TearDown network for sandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" successfully" Jun 25 16:20:24.769707 containerd[1287]: time="2024-06-25T16:20:24.769637247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 16:20:24.769863 containerd[1287]: time="2024-06-25T16:20:24.769717612Z" level=info msg="RemovePodSandbox \"8d7ca8106fda555b36a01c75b7fbb5dcb36b76e751aba70062262566328e8c09\" returns successfully" Jun 25 16:20:24.770293 containerd[1287]: time="2024-06-25T16:20:24.770260164Z" level=info msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.806 [WARNING][4665] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8xmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a47d09-249f-41aa-9f0e-6a405db06ba3", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f", Pod:"csi-node-driver-v8xmb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali28b0a5fe1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.806 [INFO][4665] k8s.go 608: Cleaning up netns ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.806 [INFO][4665] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" iface="eth0" netns="" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.806 [INFO][4665] k8s.go 615: Releasing IP address(es) ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.806 [INFO][4665] utils.go 188: Calico CNI releasing IP address ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.827 [INFO][4673] ipam_plugin.go 411: Releasing address using handleID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.827 [INFO][4673] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.827 [INFO][4673] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.834 [WARNING][4673] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.834 [INFO][4673] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.836 [INFO][4673] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.839372 containerd[1287]: 2024-06-25 16:20:24.837 [INFO][4665] k8s.go 621: Teardown processing complete. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.840049 containerd[1287]: time="2024-06-25T16:20:24.840010530Z" level=info msg="TearDown network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" successfully" Jun 25 16:20:24.840123 containerd[1287]: time="2024-06-25T16:20:24.840106264Z" level=info msg="StopPodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" returns successfully" Jun 25 16:20:24.840685 containerd[1287]: time="2024-06-25T16:20:24.840635061Z" level=info msg="RemovePodSandbox for \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" Jun 25 16:20:24.840754 containerd[1287]: time="2024-06-25T16:20:24.840688263Z" level=info msg="Forcibly stopping sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\"" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.873 [WARNING][4697] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v8xmb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77a47d09-249f-41aa-9f0e-6a405db06ba3", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 19, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f", Pod:"csi-node-driver-v8xmb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali28b0a5fe1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.873 [INFO][4697] k8s.go 608: Cleaning up netns ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.873 [INFO][4697] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" iface="eth0" netns="" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.873 [INFO][4697] k8s.go 615: Releasing IP address(es) ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.874 [INFO][4697] utils.go 188: Calico CNI releasing IP address ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.911 [INFO][4705] ipam_plugin.go 411: Releasing address using handleID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.911 [INFO][4705] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.911 [INFO][4705] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.917 [WARNING][4705] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.918 [INFO][4705] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" HandleID="k8s-pod-network.1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Workload="localhost-k8s-csi--node--driver--v8xmb-eth0" Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.920 [INFO][4705] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 16:20:24.923914 containerd[1287]: 2024-06-25 16:20:24.922 [INFO][4697] k8s.go 621: Teardown processing complete. ContainerID="1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2" Jun 25 16:20:24.924655 containerd[1287]: time="2024-06-25T16:20:24.923964139Z" level=info msg="TearDown network for sandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" successfully" Jun 25 16:20:24.948177 containerd[1287]: time="2024-06-25T16:20:24.948127361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 16:20:24.948371 containerd[1287]: time="2024-06-25T16:20:24.948194109Z" level=info msg="RemovePodSandbox \"1796fc33b23aeb795895c2805f2fc961581c423664611bb3c6265cc7916c0ea2\" returns successfully" Jun 25 16:20:25.046826 containerd[1287]: time="2024-06-25T16:20:25.046637425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:25.049110 containerd[1287]: time="2024-06-25T16:20:25.048796934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 16:20:25.050093 containerd[1287]: time="2024-06-25T16:20:25.050052386Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:25.059251 containerd[1287]: time="2024-06-25T16:20:25.054827169Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:25.059251 containerd[1287]: time="2024-06-25T16:20:25.057526604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:25.059251 containerd[1287]: time="2024-06-25T16:20:25.058314979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.690490223s" Jun 25 16:20:25.059251 containerd[1287]: time="2024-06-25T16:20:25.058362580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 16:20:25.061034 containerd[1287]: 
time="2024-06-25T16:20:25.060848707Z" level=info msg="CreateContainer within sandbox \"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 16:20:25.078161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402414219.mount: Deactivated successfully. Jun 25 16:20:25.085460 containerd[1287]: time="2024-06-25T16:20:25.085392669Z" level=info msg="CreateContainer within sandbox \"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a\"" Jun 25 16:20:25.086311 containerd[1287]: time="2024-06-25T16:20:25.086206642Z" level=info msg="StartContainer for \"d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a\"" Jun 25 16:20:25.112474 systemd[1]: run-containerd-runc-k8s.io-d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a-runc.nbIj98.mount: Deactivated successfully. Jun 25 16:20:25.121401 systemd[1]: Started cri-containerd-d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a.scope - libcontainer container d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a. Jun 25 16:20:25.154000 audit: BPF prog-id=177 op=LOAD Jun 25 16:20:25.154000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3996 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:25.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393637663532346530646334343061323563333832303131346466 Jun 25 16:20:25.154000 audit: BPF prog-id=178 op=LOAD Jun 25 16:20:25.154000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3996 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:25.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393637663532346530646334343061323563333832303131346466 Jun 25 16:20:25.154000 audit: BPF prog-id=178 op=UNLOAD Jun 25 16:20:25.155000 audit: BPF prog-id=177 op=UNLOAD Jun 25 16:20:25.155000 audit: BPF prog-id=179 op=LOAD Jun 25 16:20:25.155000 audit[4728]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3996 pid=4728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:25.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434393637663532346530646334343061323563333832303131346466 Jun 25 16:20:25.193807 containerd[1287]: time="2024-06-25T16:20:25.193737010Z" level=info msg="StartContainer for 
\"d4967f524e0dc440a25c3820114dfc74dede5527e4b90ad755c03551234eb58a\" returns successfully" Jun 25 16:20:25.194993 containerd[1287]: time="2024-06-25T16:20:25.194949990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 16:20:26.884589 containerd[1287]: time="2024-06-25T16:20:26.884527283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:26.901845 containerd[1287]: time="2024-06-25T16:20:26.901771209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 16:20:26.911617 containerd[1287]: time="2024-06-25T16:20:26.911576445Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:26.929454 containerd[1287]: time="2024-06-25T16:20:26.929427767Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:26.943750 containerd[1287]: time="2024-06-25T16:20:26.943712011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:20:26.944394 containerd[1287]: time="2024-06-25T16:20:26.944338554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.749335041s" Jun 25 16:20:26.944394 containerd[1287]: time="2024-06-25T16:20:26.944389131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 16:20:26.946948 containerd[1287]: time="2024-06-25T16:20:26.946910423Z" level=info msg="CreateContainer within sandbox \"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 16:20:26.962859 containerd[1287]: time="2024-06-25T16:20:26.962795169Z" level=info msg="CreateContainer within sandbox \"db82eb17e8c4994b3ae46565906ee83b464247dd57ddf3a08d35a7d5a08b753f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47d5a0e681eb311279fe0aeb1ef780c37399b6a5dc945c0a2b8bfea31cb4b1a4\"" Jun 25 16:20:26.963432 containerd[1287]: time="2024-06-25T16:20:26.963405020Z" level=info msg="StartContainer for \"47d5a0e681eb311279fe0aeb1ef780c37399b6a5dc945c0a2b8bfea31cb4b1a4\"" Jun 25 16:20:26.995425 systemd[1]: Started cri-containerd-47d5a0e681eb311279fe0aeb1ef780c37399b6a5dc945c0a2b8bfea31cb4b1a4.scope - libcontainer container 47d5a0e681eb311279fe0aeb1ef780c37399b6a5dc945c0a2b8bfea31cb4b1a4. 
Jun 25 16:20:27.005000 audit: BPF prog-id=180 op=LOAD Jun 25 16:20:27.005000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3996 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:27.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643561306536383165623331313237396665306165623165663738 Jun 25 16:20:27.005000 audit: BPF prog-id=181 op=LOAD Jun 25 16:20:27.005000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3996 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:27.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643561306536383165623331313237396665306165623165663738 Jun 25 16:20:27.005000 audit: BPF prog-id=181 op=UNLOAD Jun 25 16:20:27.005000 audit: BPF prog-id=180 op=UNLOAD Jun 25 16:20:27.005000 audit: BPF prog-id=182 op=LOAD Jun 25 16:20:27.005000 audit[4770]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3996 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:27.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437643561306536383165623331313237396665306165623165663738 Jun 25 16:20:27.081385 containerd[1287]: time="2024-06-25T16:20:27.081323457Z" level=info msg="StartContainer for \"47d5a0e681eb311279fe0aeb1ef780c37399b6a5dc945c0a2b8bfea31cb4b1a4\" returns successfully" Jun 25 16:20:27.706026 kubelet[2281]: I0625 16:20:27.705975 2281 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 16:20:27.706026 kubelet[2281]: I0625 16:20:27.706011 2281 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 16:20:29.515526 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:40902.service - OpenSSH per-connection server daemon (10.0.0.1:40902). Jun 25 16:20:29.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:40902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:29.516603 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 16:20:29.516660 kernel: audit: type=1130 audit(1719332429.514:678): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:40902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 16:20:29.547000 audit[4813]: USER_ACCT pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.548702 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 40902 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:29.549852 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:29.548000 audit[4813]: CRED_ACQ pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.553250 systemd-logind[1271]: New session 16 of user core. Jun 25 16:20:29.570516 kernel: audit: type=1101 audit(1719332429.547:679): pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.570561 kernel: audit: type=1103 audit(1719332429.548:680): pid=4813 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.570591 kernel: audit: type=1006 audit(1719332429.548:681): pid=4813 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 16:20:29.570617 kernel: audit: type=1300 audit(1719332429.548:681): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc92154c30 a2=3 a3=7efd19b65480 items=0 ppid=1 pid=4813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:29.570645 kernel: audit: type=1327 audit(1719332429.548:681): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:29.548000 audit[4813]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc92154c30 a2=3 a3=7efd19b65480 items=0 ppid=1 pid=4813 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:29.548000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:29.570463 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 16:20:29.572000 audit[4813]: USER_START pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.574000 audit[4815]: CRED_ACQ pid=4815 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.580076 kernel: audit: type=1105 audit(1719332429.572:682): pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.580130 kernel: audit: type=1103 audit(1719332429.574:683): pid=4815 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.684839 sshd[4813]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:29.684000 audit[4813]: USER_END pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.687344 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:40902.service: Deactivated successfully. Jun 25 16:20:29.688209 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 16:20:29.688860 systemd-logind[1271]: Session 16 logged out. Waiting for processes to exit. Jun 25 16:20:29.689702 systemd-logind[1271]: Removed session 16. Jun 25 16:20:29.684000 audit[4813]: CRED_DISP pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.692607 kernel: audit: type=1106 audit(1719332429.684:684): pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.692670 kernel: audit: type=1104 audit(1719332429.684:685): pid=4813 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:29.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.61:22-10.0.0.1:40902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:34.417305 kubelet[2281]: E0625 16:20:34.417264 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:34.695134 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:40906.service - OpenSSH per-connection server daemon (10.0.0.1:40906). Jun 25 16:20:34.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:40906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.696246 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:20:34.696315 kernel: audit: type=1130 audit(1719332434.694:687): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:40906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:34.720000 audit[4828]: USER_ACCT pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.721803 sshd[4828]: Accepted publickey for core from 10.0.0.1 port 40906 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:34.722770 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:34.733264 kernel: audit: type=1101 audit(1719332434.720:688): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.733399 kernel: audit: type=1103 audit(1719332434.721:689): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.733427 kernel: audit: type=1006 audit(1719332434.721:690): pid=4828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 16:20:34.733455 kernel: audit: type=1300 audit(1719332434.721:690): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda43a6d00 a2=3 a3=7f336c267480 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:34.721000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.721000 audit[4828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda43a6d00 a2=3 a3=7f336c267480 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:34.727667 systemd-logind[1271]: New session 17 of user core. 
Jun 25 16:20:34.737906 kernel: audit: type=1327 audit(1719332434.721:690): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:34.721000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:34.734452 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 16:20:34.741000 audit[4828]: USER_START pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.748790 kernel: audit: type=1105 audit(1719332434.741:691): pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.748834 kernel: audit: type=1103 audit(1719332434.742:692): pid=4846 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.742000 audit[4846]: CRED_ACQ pid=4846 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.858744 sshd[4828]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:34.858000 audit[4828]: USER_END pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.861850 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:40906.service: Deactivated successfully. Jun 25 16:20:34.862702 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 16:20:34.863904 systemd-logind[1271]: Session 17 logged out. Waiting for processes to exit. Jun 25 16:20:34.864741 systemd-logind[1271]: Removed session 17. Jun 25 16:20:34.858000 audit[4828]: CRED_DISP pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.873701 kernel: audit: type=1106 audit(1719332434.858:693): pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.873752 kernel: audit: type=1104 audit(1719332434.858:694): pid=4828 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:34.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.61:22-10.0.0.1:40906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:37.273000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:37.273000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0027e0ca0 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:37.273000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:37.273000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:37.273000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a400 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:37.273000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:37.274000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:37.274000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000d9a440 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:37.274000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:37.274000 audit[2107]: AVC avc: denied { watch } for pid=2107 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=6269 scontext=system_u:system_r:container_t:s0:c382,c828 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 16:20:37.274000 audit[2107]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d9a480 a2=fc6 a3=0 items=0 ppid=1985 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c382,c828 key=(null) Jun 25 16:20:37.274000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 16:20:39.869847 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:54378.service - OpenSSH per-connection server daemon (10.0.0.1:54378). Jun 25 16:20:39.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:39.870838 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 16:20:39.870891 kernel: audit: type=1130 audit(1719332439.869:700): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:39.903000 audit[4872]: USER_ACCT pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.904232 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 54378 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:39.905780 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:39.905000 audit[4872]: CRED_ACQ pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.911184 systemd-logind[1271]: New session 18 of user core. 
Jun 25 16:20:39.913464 kernel: audit: type=1101 audit(1719332439.903:701): pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.913553 kernel: audit: type=1103 audit(1719332439.905:702): pid=4872 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.913599 kernel: audit: type=1006 audit(1719332439.905:703): pid=4872 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 16:20:39.915681 kernel: audit: type=1300 audit(1719332439.905:703): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a6aedf0 a2=3 a3=7fadee135480 items=0 ppid=1 pid=4872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.905000 audit[4872]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6a6aedf0 a2=3 a3=7fadee135480 items=0 ppid=1 pid=4872 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:39.920755 kernel: audit: type=1327 audit(1719332439.905:703): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:39.905000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:39.924627 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 16:20:39.934000 audit[4872]: USER_START pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.934000 audit[4874]: CRED_ACQ pid=4874 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.942716 kernel: audit: type=1105 audit(1719332439.934:704): pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:39.942824 kernel: audit: type=1103 audit(1719332439.934:705): pid=4874 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:40.040409 sshd[4872]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:40.041000 audit[4872]: USER_END pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:40.042979 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:54378.service: Deactivated successfully. 
Jun 25 16:20:40.043865 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 16:20:40.044402 systemd-logind[1271]: Session 18 logged out. Waiting for processes to exit. Jun 25 16:20:40.045060 systemd-logind[1271]: Removed session 18. Jun 25 16:20:40.041000 audit[4872]: CRED_DISP pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:40.050172 kernel: audit: type=1106 audit(1719332440.041:706): pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:40.050250 kernel: audit: type=1104 audit(1719332440.041:707): pid=4872 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:40.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.61:22-10.0.0.1:54378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:45.052020 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:54384.service - OpenSSH per-connection server daemon (10.0.0.1:54384). Jun 25 16:20:45.052375 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 16:20:45.052415 kernel: audit: type=1130 audit(1719332445.050:709): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:45.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:45.079000 audit[4889]: USER_ACCT pid=4889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.081101 sshd[4889]: Accepted publickey for core from 10.0.0.1 port 54384 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:45.082263 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:45.080000 audit[4889]: CRED_ACQ pid=4889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.086935 systemd-logind[1271]: New session 19 of user core. 
Jun 25 16:20:45.087617 kernel: audit: type=1101 audit(1719332445.079:710): pid=4889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.087655 kernel: audit: type=1103 audit(1719332445.080:711): pid=4889 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.087676 kernel: audit: type=1006 audit(1719332445.080:712): pid=4889 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 16:20:45.080000 audit[4889]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb5a3ec10 a2=3 a3=7f7cac61d480 items=0 ppid=1 pid=4889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.093091 kernel: audit: type=1300 audit(1719332445.080:712): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb5a3ec10 a2=3 a3=7f7cac61d480 items=0 ppid=1 pid=4889 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.093161 kernel: audit: type=1327 audit(1719332445.080:712): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:45.080000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:45.099523 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 16:20:45.103000 audit[4889]: USER_START pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.104000 audit[4891]: CRED_ACQ pid=4891 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.110623 kernel: audit: type=1105 audit(1719332445.103:713): pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.110709 kernel: audit: type=1103 audit(1719332445.104:714): pid=4891 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.209523 sshd[4889]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:45.209000 audit[4889]: USER_END pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.209000 audit[4889]: CRED_DISP pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.217159 kernel: audit: type=1106 audit(1719332445.209:715): pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.217229 kernel: audit: type=1104 audit(1719332445.209:716): pid=4889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.220862 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:54384.service: Deactivated successfully. Jun 25 16:20:45.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.61:22-10.0.0.1:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:45.221663 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 16:20:45.222292 systemd-logind[1271]: Session 19 logged out. Waiting for processes to exit. Jun 25 16:20:45.229936 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390). Jun 25 16:20:45.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:54390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:45.230878 systemd-logind[1271]: Removed session 19. Jun 25 16:20:45.253000 audit[4902]: USER_ACCT pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.255008 sshd[4902]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:45.254000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.254000 audit[4902]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd931dba10 a2=3 a3=7f921f492480 items=0 ppid=1 pid=4902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.254000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:45.256385 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:45.260389 systemd-logind[1271]: New session 20 of user core. Jun 25 16:20:45.270363 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 16:20:45.274000 audit[4902]: USER_START pid=4902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.276000 audit[4905]: CRED_ACQ pid=4905 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.687662 sshd[4902]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:45.687000 audit[4902]: USER_END pid=4902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.687000 audit[4902]: CRED_DISP pid=4902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.701580 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:54390.service: Deactivated successfully. Jun 25 16:20:45.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.61:22-10.0.0.1:54390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:45.702348 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 16:20:45.702918 systemd-logind[1271]: Session 20 logged out. Waiting for processes to exit. Jun 25 16:20:45.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:54400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:45.704699 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:54400.service - OpenSSH per-connection server daemon (10.0.0.1:54400). Jun 25 16:20:45.706055 systemd-logind[1271]: Removed session 20. Jun 25 16:20:45.733000 audit[4915]: USER_ACCT pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.735003 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 54400 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:45.734000 audit[4915]: CRED_ACQ pid=4915 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.734000 audit[4915]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf465b9d0 a2=3 a3=7f026e459480 items=0 ppid=1 pid=4915 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:45.734000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:45.736168 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:45.739357 systemd-logind[1271]: New session 21 of user core. Jun 25 16:20:45.745345 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 16:20:45.748000 audit[4915]: USER_START pid=4915 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:45.749000 audit[4917]: CRED_ACQ pid=4917 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.688000 audit[4930]: NETFILTER_CFG table=filter:115 family=2 entries=20 op=nft_register_rule pid=4930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:46.688000 audit[4930]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffcf8e81e0 a2=0 a3=7fffcf8e81cc items=0 ppid=2426 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:46.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:46.689000 audit[4930]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4930 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:46.689000 audit[4930]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcf8e81e0 a2=0 a3=0 items=0 ppid=2426 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:46.689000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:46.702000 audit[4932]: NETFILTER_CFG table=filter:117 family=2 entries=32 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:46.702000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff4d3314a0 a2=0 a3=7fff4d33148c items=0 ppid=2426 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:46.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:46.711434 sshd[4915]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:46.703000 audit[4932]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:46.703000 audit[4932]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff4d3314a0 a2=0 a3=0 items=0 ppid=2426 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:46.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:46.713000 audit[4915]: USER_END pid=4915 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.713000 audit[4915]: CRED_DISP pid=4915 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.718764 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:54400.service: Deactivated successfully. Jun 25 16:20:46.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.61:22-10.0.0.1:54400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:46.719376 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 16:20:46.719929 systemd-logind[1271]: Session 21 logged out. Waiting for processes to exit. Jun 25 16:20:46.721402 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:56844.service - OpenSSH per-connection server daemon (10.0.0.1:56844). Jun 25 16:20:46.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:56844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:46.722158 systemd-logind[1271]: Removed session 21. 
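The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch in Python (the hex value is copied from the iptables-restore records above; variable names are illustrative, not taken from the log):

    # Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated argv.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

The sshd records use the same encoding; 737368643A20636F7265205B707269765D decodes to "sshd: core [priv]".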
Jun 25 16:20:46.748000 audit[4935]: USER_ACCT pid=4935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.750306 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 56844 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:46.749000 audit[4935]: CRED_ACQ pid=4935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.749000 audit[4935]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedf666ef0 a2=3 a3=7f8aedcba480 items=0 ppid=1 pid=4935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:46.749000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:46.751685 sshd[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:46.755418 systemd-logind[1271]: New session 22 of user core. Jun 25 16:20:46.764342 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 16:20:46.767000 audit[4935]: USER_START pid=4935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:46.768000 audit[4937]: CRED_ACQ pid=4937 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.222680 sshd[4935]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:47.222000 audit[4935]: USER_END pid=4935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.222000 audit[4935]: CRED_DISP pid=4935 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.229812 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:56844.service: Deactivated successfully. Jun 25 16:20:47.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.61:22-10.0.0.1:56844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:47.230487 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 16:20:47.231133 systemd-logind[1271]: Session 22 logged out. Waiting for processes to exit. Jun 25 16:20:47.240977 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:56850.service - OpenSSH per-connection server daemon (10.0.0.1:56850). 
Jun 25 16:20:47.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:56850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:47.242109 systemd-logind[1271]: Removed session 22. Jun 25 16:20:47.268000 audit[4947]: USER_ACCT pid=4947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.270377 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 56850 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:47.269000 audit[4947]: CRED_ACQ pid=4947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.270000 audit[4947]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdcc12ff0 a2=3 a3=7fabfb68d480 items=0 ppid=1 pid=4947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:47.270000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:47.271550 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:47.275325 systemd-logind[1271]: New session 23 of user core. Jun 25 16:20:47.286403 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 16:20:47.289000 audit[4947]: USER_START pid=4947 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.290000 audit[4949]: CRED_ACQ pid=4949 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.397913 sshd[4947]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:47.397000 audit[4947]: USER_END pid=4947 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.397000 audit[4947]: CRED_DISP pid=4947 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:47.400686 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:56850.service: Deactivated successfully. Jun 25 16:20:47.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.61:22-10.0.0.1:56850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:47.401427 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 16:20:47.402008 systemd-logind[1271]: Session 23 logged out. Waiting for processes to exit. 
Jun 25 16:20:47.402762 systemd-logind[1271]: Removed session 23. Jun 25 16:20:52.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:56856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:52.416764 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:56856.service - OpenSSH per-connection server daemon (10.0.0.1:56856). Jun 25 16:20:52.417753 kernel: kauditd_printk_skb: 57 callbacks suppressed Jun 25 16:20:52.417979 kernel: audit: type=1130 audit(1719332452.415:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:56856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:52.445000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.447344 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 56856 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:52.448165 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:52.446000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.452689 kernel: audit: type=1101 audit(1719332452.445:759): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.452749 kernel: audit: type=1103 audit(1719332452.446:760): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.452784 kernel: audit: type=1006 audit(1719332452.446:761): pid=4987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 16:20:52.455258 kernel: audit: type=1300 audit(1719332452.446:761): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffacad09b0 a2=3 a3=7ff1dcdf4480 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.446000 audit[4987]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffacad09b0 a2=3 a3=7ff1dcdf4480 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.454051 systemd-logind[1271]: New session 24 of user core. Jun 25 16:20:52.446000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:52.459153 kernel: audit: type=1327 audit(1719332452.446:761): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:52.464568 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 16:20:52.469000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.471000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.476555 kernel: audit: type=1105 audit(1719332452.469:762): pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.476657 kernel: audit: type=1103 audit(1719332452.471:763): pid=4989 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.571789 sshd[4987]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:52.571000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.572000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.577166 systemd[1]: sshd@23-10.0.0.61:22-10.0.0.1:56856.service: Deactivated successfully. Jun 25 16:20:52.577971 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 16:20:52.578643 systemd-logind[1271]: Session 24 logged out. Waiting for processes to exit. Jun 25 16:20:52.579400 kernel: audit: type=1106 audit(1719332452.571:764): pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.579456 kernel: audit: type=1104 audit(1719332452.572:765): pid=4987 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:52.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.61:22-10.0.0.1:56856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:52.579653 systemd-logind[1271]: Removed session 24. 
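The SSH sessions in this stretch open and close within roughly a second of each other. A small sketch, assuming one journal entry per line (the dump above wraps entries), that pairs systemd-logind's "New session" and "Removed session" lines and reports how long each session stayed open; the file name is a placeholder:

    import re
    from datetime import datetime

    TS = "%b %d %H:%M:%S.%f"   # journal prefix, e.g. "Jun 25 16:20:45.230878" (year not logged)
    pattern = re.compile(
        r"(\w{3} +\d+ +[\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)")
    opened = {}

    with open("journal.txt") as log:   # placeholder path for a dump like this one
        for line in log:
            m = pattern.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), TS)
            if m.group(2) == "New":
                opened[m.group(3)] = ts
            elif m.group(3) in opened:
                secs = (ts - opened.pop(m.group(3))).total_seconds()
                print(f"session {m.group(3)} open for {secs:.3f}s")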
Jun 25 16:20:52.909000 audit[5000]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:52.909000 audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe75584560 a2=0 a3=7ffe7558454c items=0 ppid=2426 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:52.912000 audit[5000]: NETFILTER_CFG table=nat:120 family=2 entries=104 op=nft_register_chain pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:52.912000 audit[5000]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe75584560 a2=0 a3=7ffe7558454c items=0 ppid=2426 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:52.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:54.417284 kubelet[2281]: E0625 16:20:54.417241 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:20:57.585620 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:55818.service - OpenSSH per-connection server daemon (10.0.0.1:55818). Jun 25 16:20:57.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:20:57.586558 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 16:20:57.586614 kernel: audit: type=1130 audit(1719332457.584:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:57.610000 audit[5031]: USER_ACCT pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.612168 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 55818 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:20:57.613908 sshd[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:20:57.612000 audit[5031]: CRED_ACQ pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.618310 kernel: audit: type=1101 audit(1719332457.610:770): pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.618428 kernel: audit: type=1103 audit(1719332457.612:771): pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.618452 kernel: audit: type=1006 audit(1719332457.612:772): pid=5031 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 16:20:57.618765 systemd-logind[1271]: New session 25 of user core. Jun 25 16:20:57.612000 audit[5031]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5374d300 a2=3 a3=7fd71f5a1480 items=0 ppid=1 pid=5031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:57.623797 kernel: audit: type=1300 audit(1719332457.612:772): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5374d300 a2=3 a3=7fd71f5a1480 items=0 ppid=1 pid=5031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:57.623872 kernel: audit: type=1327 audit(1719332457.612:772): proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:57.612000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:20:57.636536 systemd[1]: Started session-25.scope - Session 25 of User core. 
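The audit(...) tags that kauditd echoes into the kernel log, such as audit(1719332457.584:769) above, carry a Unix epoch timestamp plus an event serial number; converting the epoch reproduces the journal's wall-clock prefix. A one-line check as a sketch:

    from datetime import datetime, timezone

    # audit(1719332457.584:769) from the records above: epoch seconds + event serial.
    epoch, serial = 1719332457.584, 769
    print(datetime.fromtimestamp(epoch, tz=timezone.utc), serial)
    # ≈ 2024-06-25 16:20:57.584 UTC, matching the "Jun 25 16:20:57..." journal prefix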
Jun 25 16:20:57.640000 audit[5031]: USER_START pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.642000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.648447 kernel: audit: type=1105 audit(1719332457.640:773): pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.648492 kernel: audit: type=1103 audit(1719332457.642:774): pid=5033 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.737774 sshd[5031]: pam_unix(sshd:session): session closed for user core Jun 25 16:20:57.737000 audit[5031]: USER_END pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.740100 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:55818.service: Deactivated successfully. Jun 25 16:20:57.741048 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 16:20:57.741887 systemd-logind[1271]: Session 25 logged out. Waiting for processes to exit. Jun 25 16:20:57.737000 audit[5031]: CRED_DISP pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.742931 systemd-logind[1271]: Removed session 25. Jun 25 16:20:57.745998 kernel: audit: type=1106 audit(1719332457.737:775): pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.746068 kernel: audit: type=1104 audit(1719332457.737:776): pid=5031 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:20:57.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.61:22-10.0.0.1:55818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 16:20:59.269524 kubelet[2281]: I0625 16:20:59.269467 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-v8xmb" podStartSLOduration=62.103239013 podCreationTimestamp="2024-06-25 16:19:49 +0000 UTC" firstStartedPulling="2024-06-25 16:20:18.778470967 +0000 UTC m=+55.439514819" lastFinishedPulling="2024-06-25 16:20:26.94465941 +0000 UTC m=+63.605703272" observedRunningTime="2024-06-25 16:20:28.095663621 +0000 UTC m=+64.756707474" watchObservedRunningTime="2024-06-25 16:20:59.269427466 +0000 UTC m=+95.930471318" Jun 25 16:20:59.269937 kubelet[2281]: I0625 16:20:59.269789 2281 topology_manager.go:215] "Topology Admit Handler" podUID="02699f22-ecba-4bef-ae79-896035a2ed08" podNamespace="calico-apiserver" podName="calico-apiserver-7bfc9cbd7-7xx5v" Jun 25 16:20:59.274506 systemd[1]: Created slice kubepods-besteffort-pod02699f22_ecba_4bef_ae79_896035a2ed08.slice - libcontainer container kubepods-besteffort-pod02699f22_ecba_4bef_ae79_896035a2ed08.slice. Jun 25 16:20:59.276604 kubelet[2281]: W0625 16:20:59.276575 2281 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jun 25 16:20:59.276604 kubelet[2281]: E0625 16:20:59.276605 2281 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jun 25 16:20:59.286000 audit[5049]: NETFILTER_CFG table=filter:121 family=2 entries=9 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:59.286000 audit[5049]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe97e070a0 a2=0 a3=7ffe97e0708c items=0 ppid=2426 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:59.286000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:59.288000 audit[5049]: NETFILTER_CFG table=nat:122 family=2 entries=44 op=nft_register_rule pid=5049 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:59.288000 audit[5049]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe97e070a0 a2=0 a3=7ffe97e0708c items=0 ppid=2426 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:59.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:59.300000 audit[5051]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:59.300000 audit[5051]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdac2c7430 a2=0 a3=7ffdac2c741c items=0 ppid=2426 pid=5051 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:59.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:59.303000 audit[5051]: NETFILTER_CFG table=nat:124 family=2 entries=44 op=nft_register_rule pid=5051 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 16:20:59.303000 audit[5051]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffdac2c7430 a2=0 a3=7ffdac2c741c items=0 ppid=2426 pid=5051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:20:59.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 16:20:59.376671 kubelet[2281]: I0625 16:20:59.376621 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42rs\" (UniqueName: \"kubernetes.io/projected/02699f22-ecba-4bef-ae79-896035a2ed08-kube-api-access-g42rs\") pod \"calico-apiserver-7bfc9cbd7-7xx5v\" (UID: \"02699f22-ecba-4bef-ae79-896035a2ed08\") " pod="calico-apiserver/calico-apiserver-7bfc9cbd7-7xx5v" Jun 25 16:20:59.376671 kubelet[2281]: I0625 16:20:59.376666 2281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/02699f22-ecba-4bef-ae79-896035a2ed08-calico-apiserver-certs\") pod \"calico-apiserver-7bfc9cbd7-7xx5v\" (UID: \"02699f22-ecba-4bef-ae79-896035a2ed08\") " pod="calico-apiserver/calico-apiserver-7bfc9cbd7-7xx5v" Jun 25 16:21:00.478075 kubelet[2281]: E0625 16:21:00.478020 2281 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 16:21:00.488999 kubelet[2281]: E0625 16:21:00.488957 2281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02699f22-ecba-4bef-ae79-896035a2ed08-calico-apiserver-certs podName:02699f22-ecba-4bef-ae79-896035a2ed08 nodeName:}" failed. No retries permitted until 2024-06-25 16:21:00.978127906 +0000 UTC m=+97.639171758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/02699f22-ecba-4bef-ae79-896035a2ed08-calico-apiserver-certs") pod "calico-apiserver-7bfc9cbd7-7xx5v" (UID: "02699f22-ecba-4bef-ae79-896035a2ed08") : failed to sync secret cache: timed out waiting for the condition Jun 25 16:21:01.078492 containerd[1287]: time="2024-06-25T16:21:01.078438267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfc9cbd7-7xx5v,Uid:02699f22-ecba-4bef-ae79-896035a2ed08,Namespace:calico-apiserver,Attempt:0,}" Jun 25 16:21:01.417883 kubelet[2281]: E0625 16:21:01.417689 2281 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 25 16:21:02.060926 systemd-networkd[1115]: cali38895e2162e: Link UP Jun 25 16:21:02.062610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 16:21:02.062753 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali38895e2162e: link becomes ready Jun 25 16:21:02.062924 systemd-networkd[1115]: cali38895e2162e: Gained carrier Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:01.984 [INFO][5056] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0 calico-apiserver-7bfc9cbd7- calico-apiserver 02699f22-ecba-4bef-ae79-896035a2ed08 1175 0 2024-06-25 16:20:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bfc9cbd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7bfc9cbd7-7xx5v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38895e2162e [] []}} ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:01.984 [INFO][5056] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.012 [INFO][5071] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" HandleID="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Workload="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.019 [INFO][5071] ipam_plugin.go 264: Auto assigning IP ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" HandleID="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Workload="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7bfc9cbd7-7xx5v", "timestamp":"2024-06-25 16:21:02.01240041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.019 [INFO][5071] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.020 [INFO][5071] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.020 [INFO][5071] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.022 [INFO][5071] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.025 [INFO][5071] ipam.go 372: Looking up existing affinities for host host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.029 [INFO][5071] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.031 [INFO][5071] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.033 [INFO][5071] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.033 [INFO][5071] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.034 [INFO][5071] ipam.go 1685: Creating new handle: k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7 Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.037 [INFO][5071] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.057 [INFO][5071] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.057 [INFO][5071] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" host="localhost" Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.057 [INFO][5071] ipam_plugin.go 373: Released host-wide IPAM lock. 
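The IPAM trace above claims 192.168.88.133/26 out of the block 192.168.88.128/26 that is affine to host "localhost". The address arithmetic can be checked with the standard library; a small sketch:

    import ipaddress

    # The /26 block loaded for host "localhost" and the address claimed from it.
    block = ipaddress.ip_network("192.168.88.128/26")
    pod_ip = ipaddress.ip_address("192.168.88.133")
    print(pod_ip in block)                                  # True
    print(block.network_address, block.broadcast_address)   # 192.168.88.128 192.168.88.191
    print(block.num_addresses)                              # 64 addresses per /26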
Jun 25 16:21:02.304796 containerd[1287]: 2024-06-25 16:21:02.057 [INFO][5071] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" HandleID="k8s-pod-network.4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Workload="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.059 [INFO][5056] k8s.go 386: Populated endpoint ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0", GenerateName:"calico-apiserver-7bfc9cbd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"02699f22-ecba-4bef-ae79-896035a2ed08", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfc9cbd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7bfc9cbd7-7xx5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38895e2162e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.059 [INFO][5056] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.059 [INFO][5056] dataplane_linux.go 68: Setting the host side veth name to cali38895e2162e ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.062 [INFO][5056] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.063 [INFO][5056] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0", GenerateName:"calico-apiserver-7bfc9cbd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"02699f22-ecba-4bef-ae79-896035a2ed08", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 16, 20, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfc9cbd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7", Pod:"calico-apiserver-7bfc9cbd7-7xx5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38895e2162e", MAC:"2e:73:8d:4e:29:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 16:21:02.305801 containerd[1287]: 2024-06-25 16:21:02.302 [INFO][5056] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7" Namespace="calico-apiserver" Pod="calico-apiserver-7bfc9cbd7-7xx5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7bfc9cbd7--7xx5v-eth0" Jun 25 16:21:02.314000 audit[5094]: NETFILTER_CFG table=filter:125 family=2 entries=55 op=nft_register_chain pid=5094 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 16:21:02.314000 audit[5094]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff1d60ced0 a2=0 a3=7fff1d60cebc items=0 ppid=3705 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.314000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 16:21:02.603483 containerd[1287]: time="2024-06-25T16:21:02.603357082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 16:21:02.603483 containerd[1287]: time="2024-06-25T16:21:02.603450639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:02.603483 containerd[1287]: time="2024-06-25T16:21:02.603472199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 16:21:02.603722 containerd[1287]: time="2024-06-25T16:21:02.603490675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 16:21:02.627391 systemd[1]: Started cri-containerd-4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7.scope - libcontainer container 4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7. Jun 25 16:21:02.634000 audit: BPF prog-id=183 op=LOAD Jun 25 16:21:02.636521 kernel: kauditd_printk_skb: 16 callbacks suppressed Jun 25 16:21:02.636602 kernel: audit: type=1334 audit(1719332462.634:783): prog-id=183 op=LOAD Jun 25 16:21:02.635000 audit: BPF prog-id=184 op=LOAD Jun 25 16:21:02.637460 systemd-resolved[1226]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 25 16:21:02.635000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5103 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.641469 kernel: audit: type=1334 audit(1719332462.635:784): prog-id=184 op=LOAD Jun 25 16:21:02.641652 kernel: audit: type=1300 audit(1719332462.635:784): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=5103 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.641702 kernel: audit: type=1327 audit(1719332462.635:784): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463343464346238633532316632316235353737316433333462646132 Jun 25 16:21:02.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463343464346238633532316632316235353737316433333462646132 Jun 25 16:21:02.635000 audit: BPF prog-id=185 op=LOAD Jun 25 16:21:02.645605 kernel: audit: type=1334 audit(1719332462.635:785): prog-id=185 op=LOAD Jun 25 16:21:02.645840 kernel: audit: type=1300 audit(1719332462.635:785): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5103 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.635000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=5103 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.649242 kernel: audit: type=1327 audit(1719332462.635:785): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463343464346238633532316632316235353737316433333462646132 Jun 25 16:21:02.635000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463343464346238633532316632316235353737316433333462646132 Jun 25 16:21:02.658434 kernel: audit: type=1334 audit(1719332462.635:786): prog-id=185 op=UNLOAD Jun 25 16:21:02.635000 audit: BPF prog-id=185 op=UNLOAD Jun 25 16:21:02.635000 audit: BPF prog-id=184 op=UNLOAD Jun 25 16:21:02.635000 audit: BPF prog-id=186 op=LOAD Jun 25 16:21:02.661453 kernel: audit: type=1334 audit(1719332462.635:787): prog-id=184 op=UNLOAD Jun 25 16:21:02.661506 kernel: audit: type=1334 audit(1719332462.635:788): prog-id=186 op=LOAD Jun 25 16:21:02.635000 audit[5112]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=5103 pid=5112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463343464346238633532316632316235353737316433333462646132 Jun 25 16:21:02.670427 containerd[1287]: time="2024-06-25T16:21:02.670380921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfc9cbd7-7xx5v,Uid:02699f22-ecba-4bef-ae79-896035a2ed08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7\"" Jun 25 16:21:02.672447 containerd[1287]: time="2024-06-25T16:21:02.672405055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 16:21:02.749268 systemd[1]: Started sshd@25-10.0.0.61:22-10.0.0.1:55830.service - OpenSSH per-connection server daemon (10.0.0.1:55830). Jun 25 16:21:02.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:55830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:02.778000 audit[5137]: USER_ACCT pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.779861 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 55830 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I Jun 25 16:21:02.779000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.779000 audit[5137]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd662cd2e0 a2=3 a3=7f123ef49480 items=0 ppid=1 pid=5137 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:02.779000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:02.781227 sshd[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 16:21:02.785422 systemd-logind[1271]: New session 26 of user core. 
Jun 25 16:21:02.792352 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 16:21:02.795000 audit[5137]: USER_START pid=5137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.796000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.891969 sshd[5137]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:02.891000 audit[5137]: USER_END pid=5137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.892000 audit[5137]: CRED_DISP pid=5137 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:02.894394 systemd[1]: sshd@25-10.0.0.61:22-10.0.0.1:55830.service: Deactivated successfully. Jun 25 16:21:02.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.61:22-10.0.0.1:55830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 16:21:02.895212 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 16:21:02.895749 systemd-logind[1271]: Session 26 logged out. Waiting for processes to exit. Jun 25 16:21:02.896546 systemd-logind[1271]: Removed session 26. 
Jun 25 16:21:03.463524 systemd-networkd[1115]: cali38895e2162e: Gained IPv6LL Jun 25 16:21:05.704670 containerd[1287]: time="2024-06-25T16:21:05.704572596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:05.705471 containerd[1287]: time="2024-06-25T16:21:05.705379123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 16:21:05.707027 containerd[1287]: time="2024-06-25T16:21:05.706933727Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:05.710792 containerd[1287]: time="2024-06-25T16:21:05.710743389Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:05.714258 containerd[1287]: time="2024-06-25T16:21:05.714202047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 16:21:05.715064 containerd[1287]: time="2024-06-25T16:21:05.714917301Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.042262142s" Jun 25 16:21:05.715064 containerd[1287]: time="2024-06-25T16:21:05.714962396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 16:21:05.716797 containerd[1287]: time="2024-06-25T16:21:05.716751864Z" level=info msg="CreateContainer within sandbox \"4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 16:21:05.740567 containerd[1287]: time="2024-06-25T16:21:05.740483782Z" level=info msg="CreateContainer within sandbox \"4c44d4b8c521f21b55771d334bda21ec5a3734b9a2710c5680b0a397f7f7c7b7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d2e2a34b066380efe9547fa8e093d61e01aa455bdb3f9d7617667e5d06ef8a9e\"" Jun 25 16:21:05.741045 containerd[1287]: time="2024-06-25T16:21:05.741026319Z" level=info msg="StartContainer for \"d2e2a34b066380efe9547fa8e093d61e01aa455bdb3f9d7617667e5d06ef8a9e\"" Jun 25 16:21:05.769509 systemd[1]: Started cri-containerd-d2e2a34b066380efe9547fa8e093d61e01aa455bdb3f9d7617667e5d06ef8a9e.scope - libcontainer container d2e2a34b066380efe9547fa8e093d61e01aa455bdb3f9d7617667e5d06ef8a9e. 
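The pull reported above fetched an image that containerd sizes at 41869036 bytes in 3.042262142s, which gives a rough throughput figure (illustration only, numbers copied from the containerd messages):

    # Rough pull throughput from the containerd figures above (illustration only).
    size_bytes = 41_869_036      # size reported for the calico/apiserver image
    duration_s = 3.042262142     # "Pulled image ... in 3.042262142s"
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ≈ 13.8 MB/s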
Jun 25 16:21:05.786000 audit: BPF prog-id=187 op=LOAD
Jun 25 16:21:05.787000 audit: BPF prog-id=188 op=LOAD
Jun 25 16:21:05.787000 audit[5186]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=5103 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:05.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653261333462303636333830656665393534376661386530393364
Jun 25 16:21:05.787000 audit: BPF prog-id=189 op=LOAD
Jun 25 16:21:05.787000 audit[5186]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=5103 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:05.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653261333462303636333830656665393534376661386530393364
Jun 25 16:21:05.787000 audit: BPF prog-id=189 op=UNLOAD
Jun 25 16:21:05.787000 audit: BPF prog-id=188 op=UNLOAD
Jun 25 16:21:05.787000 audit: BPF prog-id=190 op=LOAD
Jun 25 16:21:05.787000 audit[5186]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=5103 pid=5186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:05.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653261333462303636333830656665393534376661386530393364
Jun 25 16:21:05.941980 containerd[1287]: time="2024-06-25T16:21:05.941892669Z" level=info msg="StartContainer for \"d2e2a34b066380efe9547fa8e093d61e01aa455bdb3f9d7617667e5d06ef8a9e\" returns successfully"
Jun 25 16:21:06.166000 audit[5218]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:06.166000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff98498f20 a2=0 a3=7fff98498f0c items=0 ppid=2426 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:06.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:21:06.168000 audit[5218]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:06.168000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7fff98498f20 a2=0 a3=7fff98498f0c items=0 ppid=2426 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:06.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:21:06.204000 audit[5220]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:06.204000 audit[5220]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe1b4509d0 a2=0 a3=7ffe1b4509bc items=0 ppid=2426 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:06.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:21:06.206000 audit[5220]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=5220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:06.206000 audit[5220]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffe1b4509d0 a2=0 a3=7ffe1b4509bc items=0 ppid=2426 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:06.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:21:06.221011 kubelet[2281]: I0625 16:21:06.220970 2281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bfc9cbd7-7xx5v" podStartSLOduration=4.177612451 podCreationTimestamp="2024-06-25 16:20:59 +0000 UTC" firstStartedPulling="2024-06-25 16:21:02.671972145 +0000 UTC m=+99.333015997" lastFinishedPulling="2024-06-25 16:21:05.715282873 +0000 UTC m=+102.376326725" observedRunningTime="2024-06-25 16:21:06.219509443 +0000 UTC m=+102.880553295" watchObservedRunningTime="2024-06-25 16:21:06.220923179 +0000 UTC m=+102.881967021"
Jun 25 16:21:07.220000 audit[5222]: NETFILTER_CFG table=filter:130 family=2 entries=8 op=nft_register_rule pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:07.220000 audit[5222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc7a3714b0 a2=0 a3=7ffc7a37149c items=0 ppid=2426 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:07.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jun 25 16:21:07.222000 audit[5222]: NETFILTER_CFG table=nat:131 family=2 entries=58 op=nft_register_chain pid=5222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jun 25 16:21:07.222000 audit[5222]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffc7a3714b0 a2=0 a3=7ffc7a37149c items=0 ppid=2426 pid=5222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jun 25 16:21:07.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
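The kubelet entry above logs podStartSLOduration=4.177612451 for calico-apiserver-7bfc9cbd7-7xx5v. Assuming the SLO figure is the time from podCreationTimestamp to watchObservedRunningTime minus the image-pull window (lastFinishedPulling - firstStartedPulling), which is an interpretation of pod_startup_latency_tracker.go rather than something the log states, the logged timestamps reproduce it exactly:

```python
# Seconds relative to podCreationTimestamp (16:20:59), read off the kubelet line above.
watch_observed_running = 7.220923179                 # 16:21:06.220923179 - 16:20:59
image_pull = 5.715282873 - 2.671972145               # lastFinishedPulling - firstStartedPulling
print(f"{watch_observed_running - image_pull:.9f}")  # 4.177612451, matching the log
```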
Jun 25 16:21:07.908265 systemd[1]: Started sshd@26-10.0.0.61:22-10.0.0.1:49622.service - OpenSSH per-connection server daemon (10.0.0.1:49622).
Jun 25 16:21:07.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:21:07.912400 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jun 25 16:21:07.912505 kernel: audit: type=1130 audit(1719332467.907:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jun 25 16:21:07.940000 audit[5226]: USER_ACCT pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:21:07.941860 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 49622 ssh2: RSA SHA256:QDE0oejU1GTflMiJVy+ZxRnTmHRZjwF0DwOtXwGp39I
Jun 25 16:21:07.942988 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 16:21:07.941000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jun 25 16:21:07.947238 systemd-logind[1271]: New session 27 of user core.
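The audit: PROCTITLE fields above carry the audited process's command line as hex-encoded bytes, with NUL bytes separating the arguments; the runc records encode a runc invocation whose container-ID path appears truncated by the kernel's size cap on this field. A small decoding sketch using the iptables-restore value copied verbatim from the records above (the sshd and runc PROCTITLE fields decode the same way):

```python
# Hex copied from the iptables-restore PROCTITLE records above; arguments are NUL-separated.
hex_proctitle = ("69707461626C65732D726573746F7265"   # "iptables-restore"
                 "002D770035"                          # "\0-w\05"
                 "002D5700313030303030"                # "\0-W\0100000"
                 "002D2D6E6F666C757368"                # "\0--noflush"
                 "002D2D636F756E74657273")             # "\0--counters"
argv = [a.decode() for a in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)  # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```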
Jun 25 16:21:07.948315 kernel: audit: type=1101 audit(1719332467.940:811): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:07.948381 kernel: audit: type=1103 audit(1719332467.941:812): pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:07.948405 kernel: audit: type=1006 audit(1719332467.941:813): pid=5226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 16:21:07.950190 kernel: audit: type=1300 audit(1719332467.941:813): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf12a8ec0 a2=3 a3=7fec1247c480 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:07.941000 audit[5226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf12a8ec0 a2=3 a3=7fec1247c480 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 16:21:07.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:07.954494 kernel: audit: type=1327 audit(1719332467.941:813): proctitle=737368643A20636F7265205B707269765D Jun 25 16:21:07.966516 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 16:21:07.970000 audit[5226]: USER_START pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:07.970000 audit[5228]: CRED_ACQ pid=5228 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:07.977802 kernel: audit: type=1105 audit(1719332467.970:814): pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:07.977851 kernel: audit: type=1103 audit(1719332467.970:815): pid=5228 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:08.188920 sshd[5226]: pam_unix(sshd:session): session closed for user core Jun 25 16:21:08.188000 audit[5226]: USER_END pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:08.191973 systemd[1]: sshd@26-10.0.0.61:22-10.0.0.1:49622.service: Deactivated successfully. 
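In the block above, each named audit record that journald logs directly (USER_ACCT, CRED_ACQ, USER_START, ...) is also echoed to the kernel ring buffer as a numeric "kernel: audit: type=NNNN" line sharing the same audit(<timestamp>:<serial>) cookie; the earlier "kauditd_printk_skb: 43 callbacks suppressed" message indicates some of these echoes were rate-limited away. Reading the pairs off this log gives the small reference mapping below; the LOGIN and SERVICE_STOP names are the standard ones for those type numbers rather than something spelled out in this excerpt:

```python
# Audit type numbers paired with record names via the shared audit(...:serial) cookies above,
# e.g. serial :811 appears both as "USER_ACCT" and as "type=1101".
AUDIT_TYPES = {
    1130: "SERVICE_START",  # :810
    1101: "USER_ACCT",      # :811
    1103: "CRED_ACQ",       # :812, :815
    1006: "LOGIN",          # :813 (standard name for this type number)
    1300: "SYSCALL",        # :813
    1327: "PROCTITLE",      # :813
    1105: "USER_START",     # :814
    1131: "SERVICE_STOP",   # standard counterpart of 1130; no numeric echo in this excerpt
}

print(AUDIT_TYPES[1130])  # SERVICE_START
```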
Jun 25 16:21:08.192953 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 16:21:08.193664 systemd-logind[1271]: Session 27 logged out. Waiting for processes to exit. Jun 25 16:21:08.194527 systemd-logind[1271]: Removed session 27. Jun 25 16:21:08.189000 audit[5226]: CRED_DISP pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:08.212668 kernel: audit: type=1106 audit(1719332468.188:816): pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:08.212817 kernel: audit: type=1104 audit(1719332468.189:817): pid=5226 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jun 25 16:21:08.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.61:22-10.0.0.1:49622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'